vachan-shetty commented on a change in pull request #15185:
URL: https://github.com/apache/beam/pull/15185#discussion_r690598911
##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -1841,15 +2120,29 @@ class ReadFromBigQuery(PTransform):
on GCS, and then reads from each produced file. File format is Avro by
default.
+ .. warning::
+ DATETIME columns are parsed as strings in the fastavro library. As a
+ result, such columns will be converted to Python strings instead of
+ native Python DATETIME types.
+
Args:
+ method: The method to use to read from BigQuery. It may be EXPORT or
+ DIRECT_READ. EXPORT invokes a BigQuery export request
+ (https://cloud.google.com/bigquery/docs/exporting-data). DIRECT_READ
+ reads directly from BigQuery storage using the BigQuery Read API
+ (https://cloud.google.com/bigquery/docs/reference/storage). If
+ unspecified, the default is currently EXPORT.
+ use_fastavro_for_direct_read (bool): If method is `DIRECT_READ` and
Review comment:
I have updated the documentation to indicate that at present only
`EXPORT` can be used to read `query` results. As for the
`use_fastavro_for_direct_read` parameter, it defaults to `True`, so
users don't need to specify it. As @kmjung mentioned above, we will (mostly)
remove support for the non-fastavro path from both the Avro and storage API
stream sources at the same time.
There are also two work items that will be addressed in follow-up PRs:
1. Supporting query sources in `RFBQ` while using the `DIRECT_READ` method.
2. Adding integration/performance tests for all of these changes.
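One practical consequence of the DATETIME warning in the docstring above: with the fastavro path, DATETIME columns arrive as strings, so pipelines that need native datetimes must convert them downstream. A minimal stdlib-only sketch of such a conversion; the helper name `parse_bq_datetime` and the row shape are hypothetical illustrations, not part of this PR:

```python
from datetime import datetime

def parse_bq_datetime(value):
    # fastavro yields BigQuery DATETIME values as ISO-8601 strings
    # (per the warning above); convert them back to native datetime
    # objects, passing through values that are already parsed.
    return datetime.fromisoformat(value) if isinstance(value, str) else value

# Hypothetical row as it might come out of ReadFromBigQuery with fastavro:
row = {'created': '2021-08-17T12:34:56'}
row['created'] = parse_bq_datetime(row['created'])
print(row['created'].year)  # -> 2021
```

In a Beam pipeline this would typically run as a `Map` step immediately after the read, e.g. `... | beam.Map(lambda r: {**r, 'created': parse_bq_datetime(r['created'])})`.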
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]