[ https://issues.apache.org/jira/browse/BEAM-11705?focusedWorklogId=547304&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-547304 ]
ASF GitHub Bot logged work on BEAM-11705:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 03/Feb/21 21:56
Start Date: 03/Feb/21 21:56
Worklog Time Spent: 10m
Work Description: pabloem merged pull request #13827:
URL: https://github.com/apache/beam/pull/13827
Issue Time Tracking
-------------------
Worklog Id: (was: 547304)
Time Spent: 1h 40m (was: 1.5h)
> Write to BigQuery always assigns a unique insert ID per row, causing a
> performance issue
> -----------------------------------------------------------------------------------
>
> Key: BEAM-11705
> URL: https://issues.apache.org/jira/browse/BEAM-11705
> Project: Beam
> Issue Type: Improvement
> Components: io-py-gcp
> Reporter: Ning Kang
> Assignee: Pablo Estrada
> Priority: P2
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> The `ignore_insert_id` argument in the BigQuery IO connector
> https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1471
> has no effect.
> The implementation of the insert-rows request always uses an auto-generated
> UUID, even when `insert_ids` is set to None because `ignore_insert_id` is
> True:
> https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_tools.py#L1062
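>
> A minimal sketch of the reported pattern (the helper name below is
> hypothetical, not the actual Beam source):
>
>     import uuid
>
>     def _rows_with_insert_ids(rows, insert_ids=None):
>         # Reported bug: a fresh UUID is generated for every row, silently
>         # overriding the caller's explicit insert_ids=None (the value set
>         # when ignore_insert_id is True).
>         insert_ids = [str(uuid.uuid4()) for _ in rows]
>         return list(zip(insert_ids, rows))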
> The implementation should instead explicitly set the insert ID to None
> rather than using a generated UUID; see this example:
> https://github.com/googleapis/python-bigquery/blob/master/samples/table_insert_rows_explicit_none_insert_ids.py#L33
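>
> For reference, a minimal sketch of the client-library call from the sample
> linked above (the table ID is a placeholder, and a table with `full_name`
> and `age` columns is assumed to exist):
>
>     from google.cloud import bigquery
>
>     client = bigquery.Client()
>     table_id = "your-project.your_dataset.your_table"  # placeholder
>     rows_to_insert = [
>         {"full_name": "Phred Phlyntstone", "age": 32},
>         {"full_name": "Wylma Phlyntstone", "age": 29},
>     ]
>     # Passing row_ids of None disables best-effort deduplication:
>     # no per-row insert ID is sent to BigQuery.
>     errors = client.insert_rows_json(
>         table_id, rows_to_insert, row_ids=[None] * len(rows_to_insert))
>     if errors:
>         print("Errors while inserting rows: {}".format(errors))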
> A unique insert ID per row makes streaming inserts very slow.
> Additionally, `DEFAULT_SHARDS_PER_DESTINATION` does not seem to have any
> effect when `ignore_insert_id` is True, because the implementation skips the
> `ReshufflePerKey` step
> (https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1422).
> When `ignore_insert_id` is True, we appear to lose control of the batch size.
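>
> For context, a minimal pipeline sketch that exercises this path (the table
> ID and schema are placeholders; `ignore_insert_id` and the streaming-inserts
> method are the arguments discussed above):
>
>     import apache_beam as beam
>     from apache_beam.io.gcp.bigquery import WriteToBigQuery
>
>     with beam.Pipeline() as p:
>         _ = (
>             p
>             | beam.Create([{"name": "a", "value": 1}])
>             | WriteToBigQuery(
>                 "your-project:your_dataset.your_table",  # placeholder
>                 schema="name:STRING,value:INTEGER",
>                 method=WriteToBigQuery.Method.STREAMING_INSERTS,
>                 # Per the report, this skips ReshufflePerKey, so batching
>                 # per destination shard is lost.
>                 ignore_insert_id=True))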