[ 
https://issues.apache.org/jira/browse/BEAM-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867312#comment-16867312
 ] 

Evgeny commented on BEAM-7300:
------------------------------

I mean that with streaming inserts you get a WriteResult object back and can 
handle all failed rows individually, whereas with a batch load, if one of the 
rows is corrupted, you can lose the whole batch
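A minimal sketch of the streaming path described above, using Beam's BigQueryIO (the project ID, dataset, and table name are hypothetical, and `rows` is assumed to be an existing PCollection<TableRow>):

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
import org.apache.beam.sdk.values.PCollection;

// Streaming inserts return a WriteResult, so rejected rows can be
// routed to a dead-letter sink instead of failing the whole load.
WriteResult result = rows.apply(
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table")  // hypothetical table
        .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
        .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors()));

// Rows BigQuery rejected (e.g. on schema mismatch) come back here;
// transient errors are retried per the policy above.
PCollection<TableRow> failed = result.getFailedInserts();
```

With FILE_LOADS (the batch method) there is no equivalent per-row failure output as of 2.12.0, which is what this issue is about.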

> Writing in batch to BigQuery may lead to data loss
> --------------------------------------------------
>
>                 Key: BEAM-7300
>                 URL: https://issues.apache.org/jira/browse/BEAM-7300
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-gcp
>    Affects Versions: 2.12.0
>            Reporter: Evgeny
>            Priority: Major
>
> There is no way to avoid data loss during a batch insert into BigQuery when 
> one of the data rows is corrupted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
