[ https://issues.apache.org/jira/browse/BEAM-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17533645#comment-17533645 ]

Johan Brodin commented on BEAM-14146:
-------------------------------------

Hi! I am seeing similar issues in the Java SDK when draining a job that uses the 
new BigQuery insert method. Is anyone else seeing the same thing? Could the bug 
have been introduced at the same time? In my case the exception was thrown at 
drain time rather than just being logged, as described here.

> Python Streaming job failing to drain with BigQueryIO write errors
> ------------------------------------------------------------------
>
>                 Key: BEAM-14146
>                 URL: https://issues.apache.org/jira/browse/BEAM-14146
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-gcp, sdk-py-core
>    Affects Versions: 2.37.0
>            Reporter: Rahul Iyer
>            Assignee: Heejong Lee
>            Priority: P1
>             Fix For: 2.39.0
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have a Python Streaming Dataflow job that writes to BigQuery using the 
> {{FILE_LOADS}} method and {{auto_sharding}} enabled. When we try to drain the 
> job it fails with the following error,
> {code:python}
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/bigquery_tools.py", line 1000, in perform_load_job
> ValueError: Either a non-empty list of fully-qualified source URIs must be provided via the source_uris parameter or an open file object must be provided via the source_stream parameter.
> {code}
> Our {{WriteToBigQuery}} configuration,
> {code:python}
> beam.io.WriteToBigQuery(
>   table=options.output_table,
>   schema=bq_schema,
>   create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
>   write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
>   insert_retry_strategy=RetryStrategy.RETRY_ON_TRANSIENT_ERROR,
>   method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>   additional_bq_parameters={
>     "timePartitioning": {
>       "type": "HOUR",
>       "field": "bq_insert_timestamp",
>     },
>     "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"],
>   },
>   triggering_frequency=120,
>   with_auto_sharding=True,
> )
> {code}
> We are also noticing that the job only fails to drain when there are actual 
> schema updates. If there are no schema updates the job drains without the 
> above error.
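
For context, the check that raises this error can be sketched as follows. This is a hypothetical reconstruction from the error message alone, not the actual code of {{perform_load_job}} in {{bigquery_tools.py}}, which does far more; it only illustrates that during a drain with schema updates the load step is apparently reached with neither source populated:

```python
def perform_load_job(source_uris=None, source_stream=None):
    """Hypothetical sketch of the validation implied by the error above.

    The real apache_beam.io.gcp.bigquery_tools method submits a BigQuery
    load job; this stub only reproduces the precondition check.
    """
    if not source_uris and source_stream is None:
        raise ValueError(
            "Either a non-empty list of fully-qualified source URIs must be "
            "provided via the source_uris parameter or an open file object "
            "must be provided via the source_stream parameter.")
    # ... the real method would submit the BigQuery load job here ...
    return source_uris if source_uris else source_stream
```

Under this reading, the drain path with {{with_auto_sharding=True}} and pending {{schemaUpdateOptions}} would have to be handing the load step an empty list of URIs and no stream.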



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
