[
https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577445#action_12577445
]
Devaraj Das commented on HADOOP-153:
------------------------------------
Enis, I agree that for Java tasks we could catch the offending record
immediately in the Child process. The problem is that with things like
Pipes apps (where the Java task spawns another child process from within), the
record number at which the exception happened is tricky to get, since the
exception is really encountered in the Pipes process. (This doesn't include
exceptions we might encounter while reading the input: those happen in
the Java parent task, and we can catch them immediately.)
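For the Java-side case discussed above, the skip could be done by wrapping the record-fetching call in a try/catch and advancing past the bad record. A minimal sketch, assuming a simplified reader; `SkippingReader` and the "BAD" marker are illustrative stand-ins, not Hadoop's actual RecordReader API:

```java
import java.util.Iterator;

// Hypothetical wrapper: skips records whose retrieval throws, and counts them.
// The Iterator stands in for repeated RecordReader.next() calls.
class SkippingReader {
    private final Iterator<String> raw;
    private long skipped = 0;

    SkippingReader(Iterator<String> raw) { this.raw = raw; }

    /** Returns the next good record, or null at end of input. */
    String next() {
        while (raw.hasNext()) {
            try {
                String rec = raw.next();
                // Simulate a corrupt record that throws during processing.
                if (rec.startsWith("BAD")) throw new RuntimeException("corrupt record");
                return rec;
            } catch (RuntimeException e) {
                skipped++;  // log and skip, as proposed in the issue
            }
        }
        return null;
    }

    long skippedCount() { return skipped; }
}
```

Because the try/catch sits in the Java parent, the record index is known at the point of failure; as noted above, a Pipes child offers no such vantage point.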
> skip records that throw exceptions
> ----------------------------------
>
> Key: HADOOP-153
> URL: https://issues.apache.org/jira/browse/HADOOP-153
> Project: Hadoop Core
> Issue Type: New Feature
> Components: mapred
> Affects Versions: 0.2.0
> Reporter: Doug Cutting
> Assignee: Devaraj Das
> Fix For: 0.17.0
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next(), then RecordReader
> implementations should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless
> they happen under RecordWriter.write(). Cancelling partial output could be
> hard, so such output errors will still result in task failure.
> This behaviour should be optional, but enabled by default. A count of errors
> per task and job should be maintained and displayed in the web UI. Perhaps
> if some percentage of records (>50%?) result in exceptions, the task
> should fail. This would stop misconfigured or buggy jobs early.
> Thoughts?
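The failure threshold proposed above reduces to a simple ratio check. A minimal sketch; `shouldFailTask` and the 0.5 cutoff are assumptions for illustration, not Hadoop code:

```java
// Hypothetical check for the proposed policy: fail the task once more than
// `threshold` of the processed records have thrown exceptions.
class ErrorThreshold {
    static boolean shouldFailTask(long failedRecords, long totalRecords, double threshold) {
        if (totalRecords == 0) return false;  // nothing processed yet
        return (double) failedRecords / totalRecords > threshold;
    }
}
```

The per-task and per-job error counters the issue calls for would feed the first two arguments, with the threshold exposed as a job configuration knob.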
--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.