udim commented on a change in pull request #12280:
URL: https://github.com/apache/beam/pull/12280#discussion_r459099536
##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -65,18 +65,19 @@
table. If specified, the result obtained by executing the specified query will
be used as the data of the input transform.::
- query_results = pipeline | beam.io.Read(beam.io.BigQuerySource(
- query='SELECT year, mean_temp FROM samples.weather_stations'))
+ query_results = pipeline | beam.io.gcp.bigquery.ReadFromBigQuery(
+ query='SELECT year, mean_temp FROM samples.weather_stations')
When creating a BigQuery input transform, users should provide either a query
or a table. Pipeline construction will fail with a validation error if neither
or both are specified.
-When reading from BigQuery using `BigQuerySource`, bytes are returned as
-base64-encoded bytes. When reading via `ReadFromBigQuery`, bytes are returned
-as bytes without base64 encoding. This is due to the fact that ReadFromBigQuery
-uses Avro expors by default. To get base64-encoded bytes, you can use the flag
-`use_json_exports` to export data as JSON, and receive base64-encoded bytes.
+When reading via `ReadFromBigQuery`, bytes are returned as raw bytes, without
+base64 encoding.
+This is due to the fact that ReadFromBigQuery uses Avro expors by default.
Review comment:
```suggestion
This is due to the fact that ReadFromBigQuery uses Avro exports by default.
```
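A quick aside on the behavior being documented in this hunk: the difference between the Avro export path (raw bytes) and the JSON export path (base64-encoded bytes) can be sketched with the standard library. This is a plain-Python illustration, not Beam API code, and the `raw` value is made up:

```python
import base64

# Illustrative only -- stdlib, not Beam. A BYTES field value as the two
# export paths would surface it, per the docs in this hunk:
raw = b"\x00\xffdata"            # Avro export: raw bytes, usable directly
encoded = base64.b64encode(raw)  # JSON export: base64-encoded bytes

# A pipeline reading via JSON exports must decode before use:
assert base64.b64decode(encoded) == raw
```

So a pipeline that opts into JSON exports (e.g. via the `use_json_exports` flag mentioned in the removed lines) needs an explicit `base64.b64decode` step that the default Avro path does not.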
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]