[ 
https://issues.apache.org/jira/browse/BEAM-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Hu reopened BEAM-14146:
--------------------------

Reopening as the issue is still around. When draining the job, this error repeatedly 
appears in the job logs:

{code}
...
content <{ "error": { "code": 400, "message": "Load configuration must specify 
at least one source URI", "errors": [ { "message": "Load configuration must 
specify at least one source URI", "domain": "global", "reason": "invalid" } ], 
"status": "INVALID_ARGUMENT" } } > [while running 'Write to BigQuery with 
file_loads/BigQueryBatchFileLoads/TriggerLoadJobsWithTempTables/ParDo(TriggerLoadJobs)-ptransform-71']
{code}
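
The 400 above is BigQuery rejecting load jobs whose configuration carries an empty 
{{sourceUris}} list. As a hedged diagnostic sketch (assumes the google-cloud-bigquery 
client library and default credentials for the affected project; this is not part of 
the Beam job itself), one can confirm that such empty load jobs are being submitted:

{code:python}
# Diagnostic sketch: list recent load jobs that were submitted with no
# source URIs, which is what the repeated 400s above correspond to.
from google.cloud import bigquery

client = bigquery.Client()
for job in client.list_jobs(max_results=100):
    if job.job_type == "load" and not job.source_uris:
        print(job.job_id, job.state, job.error_result)
{code}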

These lines appear in the worker logs:
{code}
Warning
2022-05-31T17:28:17.354806423Z Both source URIs and source stream are not provided. BigQuery load job will not load any data.
{code}
They are related to the change in https://github.com/apache/beam/pull/17566 . Since 
that change the job no longer fails, but the drain never completes.
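
For context, a rough sketch of the behavior change, reconstructed from the two 
messages quoted above (this is NOT the actual Beam code; the real logic lives in 
{{bigquery_tools.py}}, and the parameter names below mirror the error message rather 
than Beam's real signature):

{code:python}
import logging

# Hypothetical reconstruction of the pre/post-PR#17566 behavior.
def trigger_load_sketch(source_uris, source_stream=None, raise_on_empty=False):
    if not source_uris and source_stream is None:
        if raise_on_empty:
            # Pre-PR behavior: draining fails outright with this ValueError.
            raise ValueError(
                "Either a non-empty list of fully-qualified source URIs must "
                "be provided via the source_uris parameter or an open file "
                "object must be provided via the source_stream parameter.")
        # Post-PR behavior: only a warning is logged. The symptom reported
        # here is that empty load jobs still reach BigQuery (hence the 400s)
        # and the drain stalls instead of failing.
        logging.warning(
            "Both source URIs and source stream are not provided. "
            "BigQuery load job will not load any data.")
        return
    # ... otherwise issue the load job ...
{code}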

> Python Streaming job failing to drain with BigQueryIO write errors
> ------------------------------------------------------------------
>
>                 Key: BEAM-14146
>                 URL: https://issues.apache.org/jira/browse/BEAM-14146
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-gcp, sdk-py-core
>    Affects Versions: 2.37.0
>            Reporter: Rahul Iyer
>            Assignee: Heejong Lee
>            Priority: P1
>             Fix For: 2.39.0
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We have a Python streaming Dataflow job that writes to BigQuery using the 
> {{FILE_LOADS}} method with {{auto_sharding}} enabled. When we try to drain the 
> job, it fails with the following error:
> {code:python}
> "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/bigquery_tools.py",
>  line 1000, in perform_load_job ValueError: Either a non-empty list of 
> fully-qualified source URIs must be provided via the source_uris parameter or 
> an open file object must be provided via the source_stream parameter.
> {code}
> Our {{WriteToBigQuery}} configuration:
> {code:python}
> import apache_beam as beam
> from apache_beam.io.gcp.bigquery_tools import RetryStrategy
>
> # options.output_table and bq_schema come from the surrounding pipeline setup.
> beam.io.WriteToBigQuery(
>   table=options.output_table,
>   schema=bq_schema,
>   create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
>   write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
>   insert_retry_strategy=RetryStrategy.RETRY_ON_TRANSIENT_ERROR,
>   method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>   additional_bq_parameters={
>     "timePartitioning": {
>       "type": "HOUR",
>       "field": "bq_insert_timestamp",
>     },
>     "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"],
>   },
>   triggering_frequency=120,
>   with_auto_sharding=True,
> )
> {code}
> We also notice that the job only fails to drain when there are actual schema 
> updates. If there are no schema updates, the job drains without the above error.
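>
> For reference, the {{additional_bq_parameters}} above map directly onto fields of 
> the BigQuery load-job configuration. A hedged sketch of the resulting load config 
> (field names are from the BigQuery {{jobs.insert}} API; destination and source 
> values are placeholders):
> {code:python}
> # Sketch only: real BigQuery load-config field names, placeholder values.
> load_config = {
>     "load": {
>         "destinationTable": {
>             "projectId": "PROJECT", "datasetId": "DATASET", "tableId": "TABLE"},
>         "sourceUris": ["gs://TEMP_BUCKET/bq_load/..."],  # must be non-empty
>         "timePartitioning": {"type": "HOUR", "field": "bq_insert_timestamp"},
>         "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"],
>         "writeDisposition": "WRITE_APPEND",
>         "createDisposition": "CREATE_IF_NEEDED",
>     }
> }
> {code}
> The "Load configuration must specify at least one source URI" error is BigQuery 
> rejecting exactly this config when {{sourceUris}} is empty.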



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
