[ 
https://issues.apache.org/jira/browse/BEAM-14146?focusedWorklogId=766920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-766920
 ]

ASF GitHub Bot logged work on BEAM-14146:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/May/22 20:50
            Start Date: 05/May/22 20:50
    Worklog Time Spent: 10m 
      Work Description: ihji commented on PR #17566:
URL: https://github.com/apache/beam/pull/17566#issuecomment-1119034702

   R: @pabloem 
   CC: @chunyang @chamikaramj 




Issue Time Tracking
-------------------

    Worklog Id:     (was: 766920)
    Time Spent: 20m  (was: 10m)

> Python Streaming job failing to drain with BigQueryIO write errors
> ------------------------------------------------------------------
>
>                 Key: BEAM-14146
>                 URL: https://issues.apache.org/jira/browse/BEAM-14146
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-gcp, sdk-py-core
>    Affects Versions: 2.37.0
>            Reporter: Rahul Iyer
>            Assignee: Heejong Lee
>            Priority: P1
>             Fix For: 2.39.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have a Python streaming Dataflow job that writes to BigQuery using the 
> {{FILE_LOADS}} method with {{auto_sharding}} enabled. When we try to drain 
> the job, it fails with the following error:
> {code:python}
> "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/bigquery_tools.py",
>  line 1000, in perform_load_job ValueError: Either a non-empty list of 
> fully-qualified source URIs must be provided via the source_uris parameter or 
> an open file object must be provided via the source_stream parameter.
> {code}
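> The error message implies the following precondition on {{perform_load_job}} (illustrative sketch only, with a simplified signature; not the actual Beam source):
> {code:python}
> # Sketch of the check described by the error: a load job needs either a
> # non-empty list of GCS source URIs or an open file object to stream from.
> # During drain, the load step apparently receives neither.
> def perform_load_job(source_uris=None, source_stream=None, **kwargs):
>     if not source_uris and source_stream is None:
>         raise ValueError(
>             'Either a non-empty list of fully-qualified source URIs must be '
>             'provided via the source_uris parameter or an open file object '
>             'must be provided via the source_stream parameter.')
>     # ... submit the BigQuery load job ...
> {code}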
> Our {{WriteToBigQuery}} configuration:
> {code:python}
> beam.io.WriteToBigQuery(
>   table=options.output_table,
>   schema=bq_schema,
>   create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
>   write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
>   insert_retry_strategy=RetryStrategy.RETRY_ON_TRANSIENT_ERROR,
>   method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>   additional_bq_parameters={
>     "timePartitioning": {
>       "type": "HOUR",
>       "field": "bq_insert_timestamp",
>     },
>     "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"],
>   },
>   triggering_frequency=120,
>   with_auto_sharding=True,
> )
> {code}
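> For context, a minimal sketch of how this transform sits in our streaming pipeline (the Pub/Sub source and JSON parse step are illustrative assumptions; the {{WriteToBigQuery}} arguments are abbreviated from the config above):
> {code:python}
> # Illustrative sketch only: the source and parsing step are assumed,
> # not our exact job; table and schema reference the config shown above.
> import json
>
> import apache_beam as beam
> from apache_beam.options.pipeline_options import PipelineOptions
>
> pipeline_options = PipelineOptions(streaming=True)
>
> with beam.Pipeline(options=pipeline_options) as p:
>     (p
>      | 'Read' >> beam.io.ReadFromPubSub(
>          subscription='projects/<project>/subscriptions/<subscription>')
>      | 'Parse' >> beam.Map(json.loads)  # assumed: JSON rows matching bq_schema
>      | 'Write' >> beam.io.WriteToBigQuery(
>          table=options.output_table,  # as in the config above
>          schema=bq_schema,            # as in the config above
>          method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>          triggering_frequency=120,
>          with_auto_sharding=True))
> {code}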
> We also notice that the job fails to drain only when there are actual 
> schema updates; if there are no schema updates, the job drains without the 
> above error.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
