[
https://issues.apache.org/jira/browse/BEAM-8910?focusedWorklogId=426670&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426670
]
ASF GitHub Bot logged work on BEAM-8910:
----------------------------------------
Author: ASF GitHub Bot
Created on: 23/Apr/20 17:59
Start Date: 23/Apr/20 17:59
Worklog Time Spent: 10m
Work Description: tvalentyn commented on issue #11086:
URL: https://github.com/apache/beam/pull/11086#issuecomment-618553284
Thanks Pablo.
1. Can we take another look at the big comment at the beginning of bigquery.py
and see if it needs an update? It sounds like we now have two ways of reading
from and writing to BigQuery. Can we add guidance for users on when to use
which, and what they will need to change when they switch from one to another?
2. The same comment says: BigQuery IO requires values of BYTES datatype to be
encoded using base64 encoding when writing to BigQuery. When bytes are read
from BigQuery, they are returned as base64-encoded bytes. Does this apply to
the new source? It sounds like it does not, so please clarify.
3. What is the test story for BQ IO? I see several branches here: native vs
non-native IO, Avro vs JSON. Which combinations are tested?
4. BigQuery tests are one of the major sources of [postcommit
flakiness](https://issues.apache.org/jira/issues/?jql=text~BigQuery%20AND%20summary~flaky%20AND%20status%3DOpen%20AND%20summary~Test).
What are the plans to address that?
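For reference, the base64 convention mentioned in point 2 can be sketched with the standard library alone. This is a minimal illustration, not Beam's API; the row shape and field names here are hypothetical:

```python
import base64

# Hypothetical row with a BYTES field destined for a JSON-based BigQuery
# write: the raw bytes must be base64-encoded before insertion.
raw = b"\x00\xffbinary payload"
row = {"name": "example", "payload": base64.b64encode(raw).decode("ascii")}

# On the read side of the JSON path, the BYTES field comes back
# base64-encoded and must be decoded to recover the original bytes.
recovered = base64.b64decode(row["payload"])
assert recovered == raw
```

The open question in point 2 is whether the new (Avro-based) source preserves this convention or returns raw bytes directly.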
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 426670)
Time Spent: 9h (was: 8h 50m)
> Use AVRO instead of JSON in BigQuery bounded source.
> ----------------------------------------------------
>
> Key: BEAM-8910
> URL: https://issues.apache.org/jira/browse/BEAM-8910
> Project: Beam
> Issue Type: Improvement
> Components: sdk-py-core
> Reporter: Kamil Wasilewski
> Assignee: Pablo Estrada
> Priority: Minor
> Time Spent: 9h
> Remaining Estimate: 0h
>
> The proposed BigQuery bounded source in the Python SDK (see PR:
> [https://github.com/apache/beam/pull/9772]) uses a BigQuery export job to
> take a snapshot of the table and read from each produced JSON file. A
> performance improvement can be gained by switching to AVRO instead.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)