[ 
https://issues.apache.org/jira/browse/BEAM-14383?focusedWorklogId=766331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-766331
 ]

ASF GitHub Bot logged work on BEAM-14383:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 04/May/22 23:16
            Start Date: 04/May/22 23:16
    Worklog Time Spent: 10m 
      Work Description: TheNeuralBit commented on PR #17517:
URL: https://github.com/apache/beam/pull/17517#issuecomment-1118017910

   Thanks @Firlej! I triggered a CI check that should exercise this new test.
   
   This looks good; my only concern is that this could be a breaking change 
for existing users, e.g. if they're unpacking the current result `destination, 
row = value`. I think that's OK as long as we document it as a breaking 
change in CHANGES.md, but @chamikaramj or @pabloem should make that 
call.
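
   To make the concern concrete, here is a minimal sketch (plain Python, no
Beam dependency) of how widening each `FailedRows` tuple from two to three
elements would break existing two-element unpacking. The exact tuple
contents are assumptions based on the issue description, not the actual
Beam output:

```python
# Assumed current shape of a FailedRows element: (destination_table, row)
current_failed_row = ("project:dataset.table", {"id": 1})
destination, row = current_failed_row  # works today

# Assumed proposed shape: (destination_table, row, error_reason)
proposed_failed_row = ("project:dataset.table", {"id": 1}, "invalid schema")
try:
    destination, row = proposed_failed_row  # raises: too many values to unpack
except ValueError as exc:
    unpack_error = str(exc)

# A forward-compatible pattern users could adopt ahead of the change:
destination, row, *rest = proposed_failed_row
```

   Star-unpacking (`*rest`) tolerates both the current and the proposed
shapes, which is one way users could insulate themselves if this lands.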




Issue Time Tracking
-------------------

    Worklog Id:     (was: 766331)
    Time Spent: 3h  (was: 2h 50m)

> Improve "FailedRows" errors returned by beam.io.WriteToBigQuery
> ---------------------------------------------------------------
>
>                 Key: BEAM-14383
>                 URL: https://issues.apache.org/jira/browse/BEAM-14383
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-py-gcp
>            Reporter: Oskar Firlej
>            Priority: P2
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> A `WriteToBigQuery` pipeline returns `errors` when trying to insert rows that 
> do not match the BigQuery table schema. `errors` is a dictionary that 
> contains one `FailedRows` key. `FailedRows` is a list of tuples where each 
> tuple has two elements: the BigQuery table name and the row that didn't match 
> the schema.
> This can be verified by running the `BigQueryIO deadletter pattern`: 
> https://beam.apache.org/documentation/patterns/bigqueryio/
> Using this approach I can print the failed rows in a pipeline. When running 
> the job, the logger simultaneously prints out the reason why the rows were 
> invalid. The reason should also be included in the tuple in addition to the 
> BigQuery table and the raw row. That way the next pipeline could process both 
> the invalid row and the reason why it is invalid.
> During my research I found a couple of alternative solutions, but I think they 
> are more complex than they need to be. That's why I explored the Beam source 
> code and found the solution to be an easy and simple change.
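
A rough sketch of the downstream handling the description asks for, using
plain Python stand-ins for the `errors['FailedRows']` elements (no Beam
dependency; the tuple shapes and the `process_failed_rows` helper are
illustrative assumptions, not the actual Beam API):

```python
def process_failed_rows(failed_rows):
    """Normalize FailedRows elements so a next stage can act on them.

    With the proposed change each element would carry the error reason as a
    third field; today the reason is only visible in the worker logs.
    """
    handled = []
    for element in failed_rows:
        if len(element) == 3:
            table, row, reason = element   # assumed proposed shape
        else:
            table, row = element           # assumed current shape
            reason = None
        handled.append({"table": table, "row": row, "reason": reason})
    return handled

# Example elements mimicking errors['FailedRows']
failed = [
    ("project:dataset.table", {"id": "x"}, "invalid value for INT64 field"),
    ("project:dataset.table", {"id": 2}),
]
results = process_failed_rows(failed)
```

With the reason attached, a dead-letter stage could route rows by failure
cause instead of re-deriving it from logs.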



--
This message was sent by Atlassian Jira
(v8.20.7#820007)