[
https://issues.apache.org/jira/browse/BEAM-13391?focusedWorklogId=696946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-696946
]
ASF GitHub Bot logged work on BEAM-13391:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 15/Dec/21 23:25
Start Date: 15/Dec/21 23:25
Worklog Time Spent: 10m
Work Description: satybald commented on a change in pull request #16156:
URL: https://github.com/apache/beam/pull/16156#discussion_r770113655
##########
File path: sdks/python/apache_beam/io/gcp/bigquery_write_it_test.py
##########
@@ -412,11 +420,103 @@ def
test_big_query_write_temp_table_append_schema_update(self):
| 'write' >> beam.io.WriteToBigQuery(
table_id,
schema=table_schema,
- write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
+ write_disposition=BigQueryDisposition.WRITE_APPEND,
max_file_size=1, # bytes
method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
additional_bq_parameters={
- 'schemaUpdateOptions': ['ALLOW_FIELD_ADDITION']}))
+ 'schemaUpdateOptions': ['ALLOW_FIELD_ADDITION']},
+ temp_file_format=file_format))
+
+ @pytest.mark.it_postcommit
+ @parameterized.expand([
+ param(file_format=FileFormat.AVRO),
+ param(file_format=FileFormat.JSON),
+ param(file_format=None),
+ ])
+ @mock.patch(
+ "apache_beam.io.gcp.bigquery_file_loads._DEFAULT_MAX_FILE_SIZE", new=1)
+ @mock.patch(
+ "apache_beam.io.gcp.bigquery_file_loads._MAXIMUM_SOURCE_URIS", new=1)
+ def test_append_schema_change_with_temporary_tables(self, file_format):
Review comment:
The first tests schema addition and the second tests field relaxation. Yep, I
can combine those two tests.
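A sketch of what combining the two scenarios could look like, using stdlib `unittest` subtests rather than the `parameterized` decorators in the diff above. The class name, the `run_schema_update` helper, and its return shape are hypothetical stand-ins for the real pipeline run, used here only to show the looping shape:

```python
import unittest


class TempTableSchemaUpdateTest(unittest.TestCase):
    """Illustrative only: folds the field-addition and field-relaxation
    scenarios into one test body. Names and payloads are hypothetical,
    not Beam's."""

    def run_schema_update(self, update_option):
        # Stand-in for the real pipeline run; here we just echo the
        # schemaUpdateOptions value WriteToBigQuery would be given.
        return {'schemaUpdateOptions': [update_option]}

    def test_append_schema_change_with_temporary_tables(self):
        for update_option in ('ALLOW_FIELD_ADDITION',
                              'ALLOW_FIELD_RELAXATION'):
            # subTest reports each schema-update option separately
            # even though both run inside one test method.
            with self.subTest(update_option=update_option):
                result = self.run_schema_update(update_option)
                self.assertEqual(
                    result['schemaUpdateOptions'], [update_option])
```

Run with `python -m unittest` in the usual way; a failure in one option does not mask the other.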
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 696946)
Remaining Estimate: 45h 40m (was: 45h 50m)
Time Spent: 2h 20m (was: 2h 10m)
> The Apache Avro library failed to parse the header with the following error:
> EOF reached
> ----------------------------------------------------------------------------------------
>
> Key: BEAM-13391
> URL: https://issues.apache.org/jira/browse/BEAM-13391
> Project: Beam
> Issue Type: Bug
> Components: io-py-gcp
> Affects Versions: 2.34.0
> Reporter: Sayat Satybaldiyev
> Priority: P2
> Original Estimate: 48h
> Time Spent: 2h 20m
> Remaining Estimate: 45h 40m
>
> When WriteToBigQuery runs with AVRO file loads and the load triggers a
> schema [modification
> job|https://github.com/apache/beam/blob/ad56edbf1fa2e6c19fea23555ecfad015f67174d/sdks/python/apache_beam/io/gcp/bigquery_file_loads.py#L323-L340],
> the following error is raised:
> {code:java}
> E RuntimeError: BigQuery job
> beam_bq_job_LOAD_AUTOMATIC_JOB_NAME_SCHEMA_MOD_STEP_171_736013176f6ea340cfd22b9f5b575c81_643a0ee608c04300bfbe67c8db85b2b0
> failed. Error Result: <ErrorProto
> E message: 'Error while reading data, error message: The Apache Avro
> library failed to parse the header with the following error: EOF reached'
> E reason: 'invalid'> [while running
> 'WriteToBigQuery/BigQueryBatchFileLoads/WaitForSchemaModJobs']
> {code}
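> A stdlib-only sketch of the failure mode the error message describes: every
> Avro Object Container File opens with the 4-byte magic `Obj\x01`, so a
> zero-byte (or truncated) temp file makes the header parser hit end-of-file
> before the magic is complete. The `check_avro_header` helper below is
> hypothetical, not part of Beam, BigQuery, or the Avro library:
>
> {code:java}
> import io
>
> AVRO_MAGIC = b'Obj\x01'  # 4-byte magic that opens every Avro OCF
>
>
> def check_avro_header(stream):
>     """Raise if the stream does not start with a complete Avro magic."""
>     header = stream.read(len(AVRO_MAGIC))
>     if len(header) < len(AVRO_MAGIC):
>         # Mirrors the parser's complaint: the file ended mid-header.
>         raise RuntimeError(
>             'The Apache Avro library failed to parse the header with '
>             'the following error: EOF reached')
>     if header != AVRO_MAGIC:
>         raise RuntimeError('Not an Avro Object Container File')
>
>
> # An empty temp file trips the EOF branch:
> try:
>     check_avro_header(io.BytesIO(b''))
> except RuntimeError as err:
>     print(err)
> {code}
>
> This suggests the schema-modification load job is handed an empty or
> truncated temp file rather than a well-formed Avro file.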
--
This message was sent by Atlassian Jira
(v8.20.1#820001)