Hi,

I'm using BigQueryIO to write the output of an unbounded streaming job to
BigQuery.

When an element in the stream cannot be written to BigQuery, BigQueryIO
seems to have some default retry logic that retries the write a few times.
However, if the write keeps failing, it seems to halt the whole pipeline.

How can I configure Beam so that, if writing an element fails a few times,
it simply gives up on that element and moves on without affecting the rest
of the pipeline?

Thanks for any advice,
Josh
