[
https://issues.apache.org/jira/browse/PIG-2614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13510636#comment-13510636
]
Joseph Adler commented on PIG-2614:
-----------------------------------
I'd argue that this isn't just true for different RecordReaders, but for
different business problems. There are cases where it's OK to run a job even if
50% of your records are bad, and cases where an error rate of 1 in
1,000,000,000,000 records is unacceptable...
How about this: I'll take the existing patch and modify it to apply to all
record readers (not just AvroStorage). I'll see if I can get this to work more
generally. I think it will be about the same amount of work to modify
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader as
it would be to modify the rewritten AvroStorage, so why not just tackle the
general problem?
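A minimal sketch of what that record-reader-level change could look like: a
wrapper that counts and skips bad records until a configured error fraction is
exceeded. The class name and the "pig.load.bad.record.threshold" property are
illustrative placeholders, not the actual patch, and it assumes the underlying
reader can keep reading after a failed record:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Sketch only: wraps any RecordReader so exceptions from bad records are
// counted and skipped until a configured error fraction is exceeded.
public class FaultTolerantRecordReader<K, V> extends RecordReader<K, V> {

    private final RecordReader<K, V> delegate;
    private float threshold;    // tolerated fraction of bad records; 0 = fail fast
    private long attempted = 0;
    private long bad = 0;

    public FaultTolerantRecordReader(RecordReader<K, V> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        delegate.initialize(split, context);
        Configuration conf = context.getConfiguration();
        // Hypothetical per-job knob, not an existing Pig property: the default
        // of 0.0 keeps today's fail-on-first-error behavior.
        threshold = conf.getFloat("pig.load.bad.record.threshold", 0.0f);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        while (true) {
            attempted++;
            try {
                return delegate.nextKeyValue();
            } catch (IOException | RuntimeException e) {
                bad++;
                if (threshold <= 0.0f || (float) bad / attempted > threshold) {
                    throw new IOException(bad + " bad record(s) in " + attempted
                            + " exceeds threshold " + threshold, e);
                }
                // Under the threshold: drop this record and keep reading.
            }
        }
    }

    @Override
    public K getCurrentKey() throws IOException, InterruptedException {
        return delegate.getCurrentKey();
    }

    @Override
    public V getCurrentValue() throws IOException, InterruptedException {
        return delegate.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return delegate.getProgress();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}
{code}
The point of making the threshold per-job configuration is that each business
problem picks its own tolerance: 0.5 for "half the records can be garbage,"
0.0 to fail on the first error as today.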
> AvroStorage crashes on LOADING a single bad record
> --------------------------------------------------
>
> Key: PIG-2614
> URL: https://issues.apache.org/jira/browse/PIG-2614
> Project: Pig
> Issue Type: Bug
> Components: piggybank
> Affects Versions: 0.10.0, 0.11
> Reporter: Russell Jurney
> Assignee: Jonathan Coveney
> Labels: avro, avrostorage, bad, book, cutting, doug, for, my,
> pig, sadism
> Fix For: 0.11, 0.10.1
>
> Attachments: PIG-2614_0.patch, PIG-2614_1.patch, PIG-2614_2.patch,
> test_avro_files.tar.gz
>
>
> AvroStorage dies when a single bad record exists, such as one with missing
> fields. This is very bad on 'big data,' where bad records are inevitable.
> See discussion at
> http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss
> for more theory.