[ 
https://issues.apache.org/jira/browse/BEAM-10634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17476152#comment-17476152
 ] 

Joachim Isaksson edited comment on BEAM-10634 at 1/14/22, 1:51 PM:
-------------------------------------------------------------------

This bug is marked as resolved, but the problem remains as far as I can tell. A 
too-large insert will retry locally, without consulting the retry policy about 
whether to retry. 

That is, even a "retry never" policy will retry (as far as I can tell, forever) 
without the BigQueryIO.Write transform emitting any errors.
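To make the failure mode concrete, here is a minimal, self-contained sketch. This is not Beam's actual code; the class and method names (RetrySketch, buggyInsert, fixedInsert, the local InsertRetryPolicy interface) are illustrative stand-ins. It models a retry loop that catches every error and retries without consulting the policy, versus one that asks the policy and surfaces non-retryable errors:

```java
import java.util.function.Supplier;

public class RetrySketch {
    // Illustrative stand-in for Beam's InsertRetryPolicy.
    interface InsertRetryPolicy {
        boolean shouldRetry(RuntimeException error);

        static InsertRetryPolicy neverRetry() {
            return e -> false;
        }
    }

    // Buggy variant: every error is treated as retryable and the policy is
    // never consulted, so a permanent error (e.g. "Request payload size
    // exceeds the limit") would loop forever. The cap exists only so this
    // sketch terminates; returns the number of attempts made.
    static int buggyInsert(Supplier<Void> insert, InsertRetryPolicy policy, int cap) {
        int attempts = 0;
        while (attempts < cap) {
            attempts++;
            try {
                insert.get();
                return attempts;
            } catch (RuntimeException e) {
                // Policy ignored: retry unconditionally.
            }
        }
        return attempts;
    }

    // Fixed variant: consult the policy; rethrow non-retryable errors so the
    // caller actually sees the failure.
    static int fixedInsert(Supplier<Void> insert, InsertRetryPolicy policy, int cap) {
        int attempts = 0;
        while (attempts < cap) {
            attempts++;
            try {
                insert.get();
                return attempts;
            } catch (RuntimeException e) {
                if (!policy.shouldRetry(e)) {
                    throw e; // surface the permanent error instead of looping
                }
            }
        }
        return attempts;
    }

    public static void main(String[] args) {
        // An insert that always fails with a permanent error.
        Supplier<Void> tooLarge = () -> {
            throw new RuntimeException(
                "Request payload size exceeds the limit: 10485760 bytes.");
        };

        // Despite neverRetry(), the buggy loop keeps retrying until the cap.
        int attempts = buggyInsert(tooLarge, InsertRetryPolicy.neverRetry(), 5);
        System.out.println("buggy attempts: " + attempts);

        // The fixed loop gives up immediately and propagates the error.
        try {
            fixedInsert(tooLarge, InsertRetryPolicy.neverRetry(), 5);
        } catch (RuntimeException e) {
            System.out.println("fixed surfaced: " + e.getMessage());
        }
    }
}
```

With the buggy loop, the "retry never" policy has no effect and no error ever reaches the caller, which matches the behavior described above.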

 


was (Author: joachim isaksson):
This bug is set as resolved, but the problem remains as far as I can tell. A 
too large insert will retry locally forever, without asking the retry policy 
whether to retry. 

That is, even a "retry never" policy will retry (as far as I can tell forever) 
without emitting any errors from the Bigquery.Write operator.

 

> BQ InsertAll retries forever on unrecoverable errors
> ----------------------------------------------------
>
>                 Key: BEAM-10634
>                 URL: https://issues.apache.org/jira/browse/BEAM-10634
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-gcp
>            Reporter: Udi Meiri
>            Assignee: Heejong Lee
>            Priority: P1
>             Fix For: Missing
>
>
> The current code attempts to handle quota-exceeded errors, but in practice it 
> catches all errors.
> For example, this error will be retried forever even though it will always 
> fail:
> {code}
> BigQuery insertAll error, retrying: Request payload size exceeds the limit: 
> 10485760 bytes.
> {code}
> Assigning to Heejong since the logic for this was last changed here: 
> https://github.com/apache/beam/pull/7189/files



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
