[
https://issues.apache.org/jira/browse/BEAM-11587?focusedWorklogId=766865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-766865
]
ASF GitHub Bot logged work on BEAM-11587:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 05/May/22 19:07
Start Date: 05/May/22 19:07
Worklog Time Spent: 10m
Work Description: TheNeuralBit commented on code in PR #17159:
URL: https://github.com/apache/beam/pull/17159#discussion_r866227685
##########
sdks/python/apache_beam/io/gcp/bigquery.py:
##########
@@ -2514,6 +2515,11 @@ def _get_pipeline_details(unused_elm):
**self._kwargs))
| _PassThroughThenCleanupTempDatasets(project_to_cleanup_pcoll))
+ def get_pcoll_from_schema(table_schema):
+ pcoll_val = apache_beam.io.gcp.bigquery_schema_tools.\
+ produce_pcoll_with_schema(table_schema)
+ return beam.Map(lambda values: pcoll_val(**values))
Review Comment:
You might need to add a `beam.Map().with_output_types(pcoll_val)`; that's how
Beam learns the element type, which it then uses to decide which coder to use. We
need it to choose SchemaCoder.
I suspect this is why the assertion is trying to encode elements with
PickleCoder.
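A minimal sketch of the reviewer's suggestion, for illustration only. `Row` below is a hypothetical stand-in for the user type that `produce_pcoll_with_schema(table_schema)` would return in the PR; attaching it via `.with_output_types()` is what lets Beam infer a schema for the elements and select SchemaCoder rather than falling back to PickleCoder.

import typing

import apache_beam as beam
from apache_beam.typehints.schemas import named_tuple_to_schema

# Hypothetical stand-in for the user type that
# produce_pcoll_with_schema(table_schema) would return in the PR.
class Row(typing.NamedTuple):
  name: str
  count: int

with beam.Pipeline() as p:
  rows = (
      p
      | beam.Create([{'name': 'a', 'count': 1}, {'name': 'b', 'count': 2}])
      # Attaching the output type is what lets Beam infer a schema for the
      # elements and pick SchemaCoder instead of falling back to PickleCoder.
      | beam.Map(lambda values: Row(**values)).with_output_types(Row))

# The NamedTuple converts to a Beam schema, so SchemaCoder applies.
print(named_tuple_to_schema(Row))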
Issue Time Tracking
-------------------
Worklog Id: (was: 766865)
Time Spent: 6h 20m (was: 6h 10m)
> Support pd.read_gbq and DataFrame.to_gbq
> ----------------------------------------
>
> Key: BEAM-11587
> URL: https://issues.apache.org/jira/browse/BEAM-11587
> Project: Beam
> Issue Type: New Feature
> Components: dsl-dataframe, io-py-gcp, sdk-py-core
> Reporter: Brian Hulette
> Assignee: Svetak Vihaan Sundhar
> Priority: P3
> Labels: dataframe-api
> Time Spent: 6h 20m
> Remaining Estimate: 0h
>
> We should support
> [read_gbq|https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_gbq.html]
> and
> [to_gbq|https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_gbq.html]
> in the DataFrame API when gcp extras are installed.
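For reference, a minimal sketch of the plain pandas calls this feature would mirror in the Beam DataFrame API; the query, table, and project IDs below are placeholders, not values from the issue.

import pandas as pd

# Read query results from BigQuery into a DataFrame.
df = pd.read_gbq(
    'SELECT name, count FROM `my-project.my_dataset.my_table`',
    project_id='my-project')

# Write the DataFrame back to a BigQuery table.
df.to_gbq('my_dataset.output_table', project_id='my-project', if_exists='replace')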