carlpayne opened a new issue, #24090:
URL: https://github.com/apache/beam/issues/24090

   ### What would you like to happen?
   
   Currently, when BigQueryIO fails to write to BigQuery, we get back a `PCollection<BigQueryInsertError>` via `getFailedInsertsWithErr` (or a `PCollection<BigQueryStorageApiInsertError>` if using `getFailedStorageApiInserts`), which gives us the `TableRow` for each failure.
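
   For reference, this is roughly how we obtain the failure output today (the table name and dispositions are placeholders; this is just a sketch):

```java
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryInsertError;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageApiInsertError;
import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
import org.apache.beam.sdk.values.PCollection;

// tableRows is a PCollection<TableRow> built upstream.
// With streaming inserts + withExtendedErrorInfo(), failed rows come back as
// BigQueryInsertError via getFailedInsertsWithErr():
WriteResult result = tableRows.apply(
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table")
        .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
        .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
        .withExtendedErrorInfo()
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER));

PCollection<BigQueryInsertError> failedInserts = result.getFailedInsertsWithErr();

// With the Storage Write API the equivalent accessor is:
// PCollection<BigQueryStorageApiInsertError> failed = result.getFailedStorageApiInserts();

// Either way, each failure element carries the TableRow (plus error details), but not
// the original input element that produced it.
```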
   
   However, it would also be very useful to have access to the original input data, not just the transformed `TableRow`. In our use case, we stream Avro data from Kafka to BigQuery, so the input to `BigQueryIO.Write` is a `KafkaRecord<String, byte[]>`, which we transform into a `TableRow` via `withFormatFunction`. What we'd like to do is write each failed insert back to Kafka (into a DLQ topic) so that we can reprocess it later. However, the only way we can currently achieve this is to convert the `TableRow` back into a `KafkaRecord`, which risks losing or altering the original data during the conversion.
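
   Sketched below is roughly what that pipeline and the reverse conversion look like today; `toTableRow` and `tableRowToAvroBytes` stand in for our own (hypothetical) helpers, and the topics/bootstrap servers are placeholders:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageApiInsertError;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

Pipeline pipeline = Pipeline.create();

// Avro payloads from Kafka.
PCollection<KafkaRecord<String, byte[]>> records = pipeline.apply(
    KafkaIO.<String, byte[]>read()
        .withBootstrapServers("kafka:9092")
        .withTopic("my-input-topic")
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(ByteArrayDeserializer.class));

// toTableRow is our (hypothetical) helper that decodes the Avro payload into a TableRow.
WriteResult result = records.apply(
    BigQueryIO.<KafkaRecord<String, byte[]>>write()
        .to("my-project:my_dataset.my_table")
        .withFormatFunction(record -> toTableRow(record))
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
        .withMethod(BigQueryIO.Write.Method.STORAGE_API_AT_LEAST_ONCE));

// To build the DLQ today we have to invert the format function;
// tableRowToAvroBytes is a hypothetical (and potentially lossy) reverse mapping.
result.getFailedStorageApiInserts()
    .apply("ToDlqRecord", ParDo.of(
        new DoFn<BigQueryStorageApiInsertError, KV<String, byte[]>>() {
          @ProcessElement
          public void process(@Element BigQueryStorageApiInsertError err,
              OutputReceiver<KV<String, byte[]>> out) {
            out.output(KV.of("dlq-key", tableRowToAvroBytes(err.getRow())));
          }
        }))
    .apply(KafkaIO.<String, byte[]>write()
        .withBootstrapServers("kafka:9092")
        .withTopic("my-dlq-topic")
        .withKeySerializer(StringSerializer.class)
        .withValueSerializer(ByteArraySerializer.class));
```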
   
   One possible workaround we've explored is joining the input `PCollection` containing the Kafka data with the failed inserts on a shared ID, so that we can recover the original messages. The main issue with this is that errors can sometimes take many hours to become visible via `getFailedStorageApiInserts` (due to https://github.com/apache/beam/issues/23291), so we would need to buffer many millions of records to cover that time window, which isn't feasible in our case.
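
   Reusing `records` and `result` from the sketch above, that workaround would look roughly like this (the `idOf`/`idFromRow` helpers and the window size are hypothetical, and both sides would also have to land in the same window for the join to match):

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.Duration;

// Key the original Avro payloads by a shared ID; idOf(...) is a hypothetical helper
// that derives the same unique ID that ends up in the generated TableRow.
PCollection<KV<String, byte[]>> keyedInput = records
    .apply("KeyInputById", MapElements
        .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptor.of(byte[].class)))
        .via((KafkaRecord<String, byte[]> r) ->
            KV.of(idOf(r.getKV().getValue()), r.getKV().getValue())))
    // The window has to span however long failures may take to surface (hours, per
    // #23291), so every input record stays buffered for that whole period.
    .apply(Window.<KV<String, byte[]>>into(FixedWindows.of(Duration.standardHours(12))));

// Key the failed rows by the same ID; idFromRow(...) is likewise hypothetical.
PCollection<KV<String, TableRow>> keyedFailures = result
    .getFailedStorageApiInserts()
    .apply("KeyFailuresById", MapElements
        .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptor.of(TableRow.class)))
        .via((BigQueryStorageApiInsertError e) -> KV.of(idFromRow(e.getRow()), e.getRow())))
    .apply(Window.<KV<String, TableRow>>into(FixedWindows.of(Duration.standardHours(12))));

final TupleTag<byte[]> inputTag = new TupleTag<byte[]>() {};
final TupleTag<TableRow> failureTag = new TupleTag<TableRow>() {};

// The join recovers the untouched payload for each failed row, which could then be
// written to the DLQ topic as-is.
PCollection<KV<String, CoGbkResult>> joined =
    KeyedPCollectionTuple.of(inputTag, keyedInput)
        .and(failureTag, keyedFailures)
        .apply(CoGroupByKey.create());
```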
   
   ### Issue Priority
   
   Priority: 3
   
   ### Issue Component
   
   Component: io-java-gcp

