[ https://issues.apache.org/jira/browse/PIG-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Cheolsoo Park updated PIG-3059:
-------------------------------
Fix Version/s: (was: 0.12)
> Global configurable minimum 'bad record' thresholds
> ---------------------------------------------------
>
> Key: PIG-3059
> URL: https://issues.apache.org/jira/browse/PIG-3059
> Project: Pig
> Issue Type: New Feature
> Components: impl
> Affects Versions: 0.11
> Reporter: Russell Jurney
> Assignee: Cheolsoo Park
> Attachments: avro_test_files-2.tar.gz, PIG-3059-2.patch,
> PIG-3059.patch
>
>
> See PIG-2614.
> Pig dies when a single record in a LOAD of a billion records fails to parse.
> This is almost certainly not the desired behavior. elephant-bird and some
> other storage UDFs define bad-record thresholds, as both a percentage and an
> absolute count, that must be exceeded before a job fails outright.
> We need these limits to be configurable for Pig, globally. I've come to
> realize how serious a problem Pig's crashing on bad records is for new Pig
> users. I believe this feature can greatly improve Pig.
> An example of a config would look like:
> pig.storage.bad.record.threshold=0.01
> pig.storage.bad.record.min=100
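> As a rough sketch only (not Pig's implementation), a loader could apply
> these two properties as below. The BadRecordPolicy class and its method
> names are hypothetical; only the property names come from the example above:
>
>     import java.util.Properties;
>
>     public class BadRecordPolicy {
>         private final double threshold; // max tolerated fraction of bad records
>         private final long min;         // bad records allowed before the ratio applies
>         private long total, bad;
>
>         public BadRecordPolicy(Properties props) {
>             threshold = Double.parseDouble(
>                     props.getProperty("pig.storage.bad.record.threshold", "0.0"));
>             min = Long.parseLong(
>                     props.getProperty("pig.storage.bad.record.min", "0"));
>         }
>
>         // Count one record; returns false when the job should fail outright.
>         public boolean onRecord(boolean parsedOk) {
>             total++;
>             if (!parsedOk) bad++;
>             // Fail only once both the absolute count and the percentage are exceeded.
>             return bad <= min || (double) bad / total <= threshold;
>         }
>     }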
> A thorough discussion of this issue is available here:
> http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira