[ https://issues.apache.org/jira/browse/BEAM-11277?focusedWorklogId=565019&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-565019 ]
ASF GitHub Bot logged work on BEAM-11277:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 11/Mar/21 23:46
Start Date: 11/Mar/21 23:46
Worklog Time Spent: 10m
Work Description: pabloem commented on a change in pull request #14113:
URL: https://github.com/apache/beam/pull/14113#discussion_r592802990
##########
File path: sdks/python/apache_beam/io/gcp/bigquery_file_loads.py
##########
@@ -857,6 +972,8 @@ def _load_data(
of the load jobs would fail but not other. If any of them fails, then
copy jobs are not triggered.
"""
+ singleton_pc = p | "ImpulseLoadData" >> beam.Create([None])
Review comment:
It'll be best to have a separate singleton for each of these paths. The
issue was that reusing one single PCollection across all of these paths
concentrated them into a single stage with ~10 side inputs, which complicates
firing of triggers for that stage, because we need to wait for ~10 upstream
stages to advance their watermarks before the 'singleton' stage can start.
If we have multiple singleton PCollections, then we have multiple stages
with only one side input each, as sketched below.
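A minimal sketch of the suggested shape, assuming hypothetical labels
"ImpulseMonitorLoadJobs" and "ImpulseMonitorCopyJobs" (the actual labels and
wiring in the PR may differ):
{code:python}
import apache_beam as beam

with beam.Pipeline() as p:
  # One impulse per downstream path, so each fused stage consumes a single
  # side input and only waits on one upstream watermark before triggering.
  # The labels are illustrative, not the ones used in bigquery_file_loads.py.
  singleton_for_load = p | "ImpulseMonitorLoadJobs" >> beam.Create([None])
  singleton_for_copy = p | "ImpulseMonitorCopyJobs" >> beam.Create([None])
  # Each singleton would then be the main input of the ParDo that consumes
  # the corresponding job results (load jobs, copy jobs) as its side input.
{code}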
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 565019)
Time Spent: 6h 10m (was: 6h)
> WriteToBigQuery with batch file loads does not respect schema update options
> when there are multiple load jobs
> --------------------------------------------------------------------------------------------------------------
>
> Key: BEAM-11277
> URL: https://issues.apache.org/jira/browse/BEAM-11277
> Project: Beam
> Issue Type: Bug
> Components: io-py-gcp, runner-dataflow
> Affects Versions: 2.21.0, 2.24.0, 2.25.0, 2.28.0
> Reporter: Chun Yang
> Assignee: Chun Yang
> Priority: P2
> Attachments: repro.py
>
> Time Spent: 6h 10m
> Remaining Estimate: 0h
>
> When multiple load jobs are needed to write data to a destination table,
> e.g., when the data is spread over more than
> [10,000|https://cloud.google.com/bigquery/quotas#load_jobs] URIs,
> WriteToBigQuery in FILE_LOADS mode will write data into temporary tables and
> then copy the temporary tables into the destination table.
> When WriteToBigQuery is used with
> {{write_disposition=BigQueryDisposition.WRITE_APPEND}} and
> {{additional_bq_parameters=\{"schemaUpdateOptions":
> ["ALLOW_FIELD_ADDITION"]\}}}, the schema update options are not respected by
> the jobs that copy data from temporary tables into the destination table. The
> effect is that for small jobs (<10K source URIs), schema field addition is
> allowed; however, if the job is scaled to >10K source URIs, then schema field
> addition will fail with an error such as:
> {code:none}Provided Schema does not match Table project:dataset.table. Cannot
> add fields (field: field_name){code}
> I've been able to reproduce this issue with Python 3.7 and DataflowRunner on
> Beam 2.21.0 and Beam 2.25.0. I could not reproduce the issue with
> DirectRunner. A minimal reproducible example is attached.
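> An illustrative sketch of the configuration described above (the table name
> and schema type are placeholders; see the attached repro.py for the actual
> reproduction):
> {code:python}
> import apache_beam as beam
>
> with beam.Pipeline() as p:
>   _ = (
>       p
>       | beam.Create([{"field_name": "value"}])
>       | beam.io.WriteToBigQuery(
>           "project:dataset.table",  # placeholder destination table
>           schema="field_name:STRING",  # placeholder field type
>           method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
>           additional_bq_parameters={
>               "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION"]}))
> {code}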
--
This message was sent by Atlassian Jira
(v8.3.4#803005)