[
https://issues.apache.org/jira/browse/BEAM-11587?focusedWorklogId=764311&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-764311
]
ASF GitHub Bot logged work on BEAM-11587:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 29/Apr/22 15:49
Start Date: 29/Apr/22 15:49
Worklog Time Spent: 10m
Work Description: TheNeuralBit commented on code in PR #17159:
URL: https://github.com/apache/beam/pull/17159#discussion_r861932135
##########
sdks/python/apache_beam/io/gcp/bigquery.py:
##########
@@ -2514,6 +2515,11 @@ def _get_pipeline_details(unused_elm):
                 **self._kwargs))
         | _PassThroughThenCleanupTempDatasets(project_to_cleanup_pcoll))
+    def get_pcoll_from_schema(table_schema):
+      pcoll_val = apache_beam.io.gcp.bigquery_schema_tools.\
+          produce_pcoll_with_schema(table_schema)
+      return beam.Map(lambda values: pcoll_val(**values))
Review Comment:
The integration test should build a pipeline using this method and run it,
similar to this test:
https://github.com/apache/beam/blob/8c4a056a63d92776ae9d6be726b37d789486afbd/sdks/python/apache_beam/io/gcp/bigquery_read_it_test.py#L350-L356
Does that make sense?
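For illustration, a minimal sketch of the kind of integration test being suggested, modeled on the shape of the linked bigquery_read_it_test.py example. The test class name, query, project, and expected rows below are placeholders, and in a real Beam IT the pipeline options would come from the test harness rather than an empty argv:

import unittest

import apache_beam as beam
from apache_beam.testing.util import assert_that, equal_to


class ReadWithSchemaToolsIT(unittest.TestCase):  # hypothetical test class
  def setUp(self):
    # In a real integration test this would be built by the test harness,
    # e.g. from TestPipeline options; empty args fall back to DirectRunner.
    self.args = []

  def test_read_builds_and_runs_pipeline(self):
    expected = [{'number': 1, 'str': 'abc'}, {'number': 2, 'str': 'def'}]
    with beam.Pipeline(argv=self.args) as p:
      rows = (
          p
          | 'Read' >> beam.io.ReadFromBigQuery(
              query='SELECT number, str FROM `my-project.my_dataset.tbl`',
              use_standard_sql=True,
              project='my-project'))
      # Verify the pipeline both constructs and produces the expected rows.
      assert_that(rows, equal_to(expected))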
Issue Time Tracking
-------------------
Worklog Id: (was: 764311)
Time Spent: 4h 50m (was: 4h 40m)
> Support pd.read_gbq and DataFrame.to_gbq
> ----------------------------------------
>
> Key: BEAM-11587
> URL: https://issues.apache.org/jira/browse/BEAM-11587
> Project: Beam
> Issue Type: New Feature
> Components: dsl-dataframe, io-py-gcp, sdk-py-core
> Reporter: Brian Hulette
> Assignee: Svetak Vihaan Sundhar
> Priority: P3
> Labels: dataframe-api
> Time Spent: 4h 50m
> Remaining Estimate: 0h
>
> We should support
> [read_gbq|https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_gbq.html]
> and
> [to_gbq|https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_gbq.html]
> in the DataFrame API when gcp extras are installed.
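For context, a minimal sketch of the pandas calls this issue asks the Beam DataFrame API to mirror; the project, dataset, and table names are placeholders, and the eventual Beam-side signatures are not specified by this worklog entry:

import pandas as pd

# pandas.read_gbq: run a query (or read a table) into a DataFrame.
df = pd.read_gbq(
    'SELECT name, value FROM `my-project.my_dataset.my_table`',
    project_id='my-project')

# DataFrame.to_gbq: write the DataFrame back out to a BigQuery table.
df.to_gbq(
    'my_dataset.my_output_table',
    project_id='my-project',
    if_exists='replace')

# The Beam counterparts would live in the DataFrame API and require the
# [gcp] extras; their exact signatures are outside the scope of this entry.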
--
This message was sent by Atlassian Jira
(v8.20.7#820007)