[ https://issues.apache.org/jira/browse/BEAM-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239815#comment-15239815 ]
Luke Cwik commented on BEAM-190:
--------------------------------

From Dan Halperin: I thought that we were under the impression that, rather than losing data, it's likely better to update your pipeline to handle these?

> Dead-letter drop for bad BigQuery records
> -----------------------------------------
>
>                 Key: BEAM-190
>                 URL: https://issues.apache.org/jira/browse/BEAM-190
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-core
>            Reporter: Mark Shields
>            Assignee: Frances Perry
>
> If a BigQuery insert fails for data-specific rather than structural reasons
> (e.g. cannot parse a date), then the bundle will be retried indefinitely, first
> by BigQueryTableInserter.insertAll and then by the overall production retry logic
> of the underlying runner.
> Better would be to allow the customer to specify a dead-letter store for records
> such as these, so that overall processing can continue while bad records are
> quarantined.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
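The dead-letter pattern the issue asks for can be sketched in a few lines, independent of Beam. This is a minimal illustration only, not Beam's API: `partition_records`, the `date_field` name, and the list-based outputs are all assumptions for the sketch. In an actual Beam pipeline the same routing would be done inside a `ParDo` with a main output for good records and an additional tagged output for failures.

```python
from datetime import datetime

def partition_records(records, date_field="ts"):
    """Route records with unparseable dates to a dead-letter list
    instead of failing (and endlessly retrying) the whole bundle."""
    good, dead_letter = [], []
    for rec in records:
        try:
            # Data-specific validation: the date field must parse.
            datetime.strptime(rec[date_field], "%Y-%m-%d")
            good.append(rec)
        except (KeyError, ValueError) as err:
            # Quarantine the bad record along with the reason,
            # so processing of the remaining records can continue.
            dead_letter.append({"record": rec, "error": str(err)})
    return good, dead_letter
```

The key design point is that a data-specific failure affects only the one record, which lands in the dead-letter output for later inspection, while structural failures (e.g. a missing table) should still surface as errors and be retried.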