[
https://issues.apache.org/jira/browse/PIG-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Cheolsoo Park reassigned PIG-3059:
----------------------------------
Assignee: (was: Cheolsoo Park)
> Global configurable minimum 'bad record' thresholds
> ---------------------------------------------------
>
> Key: PIG-3059
> URL: https://issues.apache.org/jira/browse/PIG-3059
> Project: Pig
> Issue Type: New Feature
> Components: impl
> Affects Versions: 0.11
> Reporter: Russell Jurney
> Attachments: PIG-3059-2.patch, PIG-3059.patch,
> avro_test_files-2.tar.gz
>
>
> See PIG-2614.
> Pig dies when a single record in a LOAD of a billion records fails to parse.
> This is almost certainly not the desired behavior. elephant-bird and some
> other storage UDFs enforce thresholds, expressed as a percentage and as an
> absolute count, that must be exceeded before a job fails outright.
> These limits should be configurable in Pig, globally. I've come to realize
> that Pig's crashing on bad records is a major problem for new Pig users, and
> I believe this feature can greatly improve Pig.
> An example configuration might look like:
> pig.storage.bad.record.threshold=0.01
> pig.storage.bad.record.min=100
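> A minimal sketch of how such a check might behave, assuming both limits must
> be exceeded before the job fails (the class and method names below are
> illustrative only, not existing Pig APIs):
>
>   // Hypothetical counter a loader could consult once per input record.
>   public class BadRecordPolicy {
>       private final double threshold; // pig.storage.bad.record.threshold
>       private final long min;         // pig.storage.bad.record.min
>       private long total = 0;
>       private long bad = 0;
>
>       public BadRecordPolicy(double threshold, long min) {
>           this.threshold = threshold;
>           this.min = min;
>       }
>
>       // Returns true once the job should fail outright: the bad-record
>       // count has passed the configured minimum AND the bad fraction has
>       // passed the configured threshold.
>       public boolean shouldFail(boolean parsedOk) {
>           total++;
>           if (!parsedOk) bad++;
>           return bad > min && (double) bad / total > threshold;
>       }
>   }
>
> With the example values above, a job would tolerate up to 100 bad records
> unconditionally, and beyond that would fail only once more than 1% of all
> records seen so far had failed to parse.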
> A thorough discussion of this issue is available here:
> http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)