[
https://issues.apache.org/jira/browse/BEAM-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ning Kang updated BEAM-11705:
-----------------------------
Description:
The `ignore_insert_id` argument in the BigQuery I/O connector
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1471
does not take effect.
This is because the implementation of the insert-rows request always uses an
auto-generated UUID per row, even though `insert_ids` is set to None when
`ignore_insert_id` is True:
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_tools.py#L1062
The implementation should explicitly set the insert ID to None instead of using a
generated UUID; see this example:
https://github.com/googleapis/python-bigquery/blob/master/samples/table_insert_rows_explicit_none_insert_ids.py#L33
A unique insert ID per row makes streaming inserts very slow.
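A minimal sketch of the intended behavior (hypothetical helper name, not the actual Beam code path): when `ignore_insert_id` is True, the per-row insert IDs should be explicit Nones rather than generated UUIDs, which disables BigQuery's best-effort deduplication and speeds up the streaming insert:

```python
import uuid


def build_insert_ids(rows, ignore_insert_id):
    """Sketch: per-row insert IDs for a streaming-insert request.

    The bug described above is that the real implementation effectively
    always takes the UUID branch, even when ignore_insert_id is True.
    """
    if ignore_insert_id:
        # Explicit None per row tells BigQuery to skip best-effort dedup,
        # which is what makes the streaming insert fast.
        return [None] * len(rows)
    return [str(uuid.uuid4()) for _ in rows]
```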
Additionally, `DEFAULT_SHARDS_PER_DESTINATION` does not seem to take effect when
`ignore_insert_id` is True, because the implementation skips the
`ReshufflePerKey` step. When `ignore_insert_id` is True, do we lose
batch-size control?
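To illustrate the batching concern (a pure-Python sketch, not Beam code; the shard count is assumed for illustration): assigning each row a bounded shard key before `ReshufflePerKey` is what spreads rows across a fixed number of groups per destination, so skipping it removes that bound:

```python
import random

# Assumed value for illustration only; see bigquery.py for the real constant.
DEFAULT_SHARDS_PER_DESTINATION = 500


def shard_rows(rows, num_shards=DEFAULT_SHARDS_PER_DESTINATION):
    """Assign each row a random shard key in [0, num_shards).

    This mimics the grouping that ReshufflePerKey relies on to keep the
    number of concurrent batches per destination bounded.
    """
    sharded = {}
    for row in rows:
        key = random.randrange(num_shards)
        sharded.setdefault(key, []).append(row)
    return sharded
```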
was:
The `ignore_insert_id` argument in the BigQuery I/O connector
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1471
does not take effect.
This is because the implementation of the insert-rows request always uses an
auto-generated UUID per row, even though `insert_ids` is set to None when
`ignore_insert_id` is True:
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_tools.py#L1062
The implementation should explicitly set the insert ID to None instead of using a
generated UUID; see this example:
https://github.com/googleapis/python-bigquery/blob/master/samples/table_insert_rows_explicit_none_insert_ids.py#L33
A unique insert ID per row makes streaming inserts very slow.
> Write to bigquery always assigns unique insert id per row causing performance
> issue
> -----------------------------------------------------------------------------------
>
> Key: BEAM-11705
> URL: https://issues.apache.org/jira/browse/BEAM-11705
> Project: Beam
> Issue Type: Improvement
> Components: io-py-gcp
> Reporter: Ning Kang
> Assignee: Pablo Estrada
> Priority: P2
>
> The `ignore_insert_id` argument in the BigQuery I/O connector
> https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L1471
> does not take effect.
> This is because the implementation of the insert-rows request always uses an
> auto-generated UUID per row, even though `insert_ids` is set to None when
> `ignore_insert_id` is True:
> https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_tools.py#L1062
> The implementation should explicitly set the insert ID to None instead of using
> a generated UUID; see this example:
> https://github.com/googleapis/python-bigquery/blob/master/samples/table_insert_rows_explicit_none_insert_ids.py#L33
> A unique insert ID per row makes streaming inserts very slow.
> Additionally, `DEFAULT_SHARDS_PER_DESTINATION` does not seem to take effect
> when `ignore_insert_id` is True, because the implementation skips the
> `ReshufflePerKey` step. When `ignore_insert_id` is True, do we lose
> batch-size control?
--
This message was sent by Atlassian Jira
(v8.3.4#803005)