[
https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12607902#action_12607902
]
Sharad Agarwal commented on HADOOP-153:
---------------------------------------
I assume there would be use cases in which a higher percentage of bad records
is acceptable, say jobs related to log mining, crawling, indexing, analysis
for research purposes, etc.
That said, even an acceptable 1% of bad records can be enormous in absolute
terms, given that Hadoop jobs generally run over huge data sets. Piggybacking
on the jobtracker's memory to hold the list of skipped records may therefore
not be a good idea, and transferring the whole list over RPC would not be
ideal either.
Also, the need to persist the list may arise due to HADOOP-3245.
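Something along these lines, perhaps: each task attempt appends the offsets of
the records it skipped to a SequenceFile in HDFS, so only a counter increment
travels over RPC. This is just a rough sketch; the class name BadRecordLog and
the "SkippedRecords" counter are made up for illustration, not an existing
Hadoop API.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical task-side log of skipped records: offsets are persisted to
// HDFS, so the full list never passes through the jobtracker's memory.
public class BadRecordLog {
  private final SequenceFile.Writer writer;

  public BadRecordLog(Configuration conf, Path logPath) throws IOException {
    FileSystem fs = logPath.getFileSystem(conf);
    // One log file per task attempt, e.g. under a job-scoped directory.
    writer = SequenceFile.createWriter(fs, conf, logPath,
        LongWritable.class, NullWritable.class);
  }

  /** Record the byte offset of a skipped record and bump a counter. */
  public void logSkipped(long recordOffset, Reporter reporter)
      throws IOException {
    writer.append(new LongWritable(recordOffset), NullWritable.get());
    // Only this counter increment crosses RPC; the offsets stay in HDFS.
    reporter.incrCounter("Task", "SkippedRecords", 1);
  }

  public void close() throws IOException {
    writer.close();
  }
}
{code}
The jobtracker then only aggregates counts for the web UI; anything that needs
the actual offsets (say, a re-run that should skip them) reads the persisted
files.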
> skip records that throw exceptions
> ----------------------------------
>
> Key: HADOOP-153
> URL: https://issues.apache.org/jira/browse/HADOOP-153
> Project: Hadoop Core
> Issue Type: New Feature
> Components: mapred
> Affects Versions: 0.2.0
> Reporter: Doug Cutting
> Assignee: Sharad Agarwal
> Attachments: skipRecords_wip1.patch
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader
> implementations should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless
> they happen under RecordWriter.write(). Cancelling partial output could be
> hard, so such output errors will still result in task failure.
> This behaviour should be optional, but enabled by default. A count of errors
> per task and job should be maintained and displayed in the web UI. Perhaps
> the task should fail if some percentage of records (>50%?) results in
> exceptions; this would stop misconfigured or buggy jobs early.
> Thoughts?
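To make the proposed skipping behaviour concrete, here is a rough sketch
against the org.apache.hadoop.mapred API of a wrapper RecordReader that skips
records whose next() throws, keeps an error count, and fails the task once
the error fraction passes a threshold (the >50% suggested above). It assumes
the underlying reader advances past a bad record before throwing; it is
illustrative only, not the attached skipRecords_wip1.patch.
{code:java}
import java.io.IOException;

import org.apache.hadoop.mapred.RecordReader;

// Illustrative wrapper: skip and count records whose next() throws, and
// fail the task once too large a fraction of records is bad.
public class SkippingRecordReader<K, V> implements RecordReader<K, V> {
  private final RecordReader<K, V> raw;
  private final float maxErrorFraction; // e.g. 0.5f per the description
  private long records = 0;
  private long errors = 0;

  public SkippingRecordReader(RecordReader<K, V> raw, float maxErrorFraction) {
    this.raw = raw;
    this.maxErrorFraction = maxErrorFraction;
  }

  public boolean next(K key, V value) throws IOException {
    while (true) {
      records++;
      try {
        return raw.next(key, value); // false means end of split
      } catch (Exception e) {
        errors++;
        // Fail the task if too many records are bad: the job is likely
        // misconfigured or the code is buggy, so stop it early.
        if ((float) errors / records > maxErrorFraction) {
          throw new IOException(errors + " of " + records
              + " records threw exceptions; failing task", e);
        }
        // Otherwise fall through and try the next record; the underlying
        // reader is assumed to have skipped past the bad record.
      }
    }
  }

  public K createKey() { return raw.createKey(); }
  public V createValue() { return raw.createValue(); }
  public long getPos() throws IOException { return raw.getPos(); }
  public float getProgress() throws IOException { return raw.getProgress(); }
  public void close() throws IOException { raw.close(); }
}
{code}
The error count kept here would also feed the per-task counter that the
description wants surfaced in the web UI.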