[jira] [Commented] (BEAM-8367) Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS
[ https://issues.apache.org/jira/browse/BEAM-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16958146#comment-16958146 ]

Pablo Estrada commented on BEAM-8367:
-------------------------------------

Yes. Fixed. Thanks!

> Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS
> ---------------------------------------------------------------------
>
>                 Key: BEAM-8367
>                 URL: https://issues.apache.org/jira/browse/BEAM-8367
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Chamikara Madhusanka Jayalath
>            Assignee: Pablo Estrada
>            Priority: Blocker
>             Fix For: 2.17.0
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> Unique IDs ensure (best effort) that writes to BigQuery are idempotent; for example, we don't write the same record twice after a VM failure.
>
> Currently the Python BQ sink inserts BQ IDs here, but they'll be re-generated after a VM failure, resulting in data duplication:
> [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py#L766]
>
> The correct fix is to do a Reshuffle to checkpoint the unique IDs once they are generated, similar to how the Java BQ sink operates:
> [https://github.com/apache/beam/blob/dcf6ad301069e4d2cfaec5db6b178acb7bb67f49/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StreamingWriteTables.java#L225]
>
> Pablo, can you do an initial assessment here?
> I think this is a relatively small fix, but I might be wrong.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
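The failure mode the issue describes can be sketched without Beam. Below is a minimal illustration (all names hypothetical, not Beam or BigQuery API): `FakeBigQueryTable` stands in for BigQuery's best-effort deduplication on `insertId`, and `tag_with_ids` plays the role of the ID-tagging step that the Reshuffle is meant to checkpoint. If the same tagged batch is replayed after a failure, dedup makes the retry a no-op; if the IDs are regenerated on retry (no checkpoint), the rows duplicate.

```python
import uuid

class FakeBigQueryTable:
    """Toy stand-in for a BigQuery table whose streaming inserts are
    deduplicated (best effort) on insertId, as BigQuery does."""
    def __init__(self):
        self.rows = {}  # insertId -> row

    def insert_all(self, rows_with_ids):
        # A second insert with an already-seen insertId is dropped.
        for insert_id, row in rows_with_ids:
            self.rows.setdefault(insert_id, row)

def tag_with_ids(rows):
    """Assign each row a fresh unique insertId. In the pipeline, the
    output of this step must be checkpointed (e.g. via Reshuffle) so
    a retry replays the SAME ids instead of regenerating them."""
    return [(str(uuid.uuid4()), row) for row in rows]

table = FakeBigQueryTable()
batch = tag_with_ids([{"user": "a"}, {"user": "b"}])

table.insert_all(batch)  # first attempt
table.insert_all(batch)  # retry after a (simulated) VM failure, same ids
assert len(table.rows) == 2  # deduplicated: no double-write

# Without checkpointing, the retry re-runs tag_with_ids and gets NEW ids:
table.insert_all(tag_with_ids([{"user": "a"}, {"user": "b"}]))
assert len(table.rows) == 4  # the same records are now written twice
```

This is why inserting a Reshuffle between ID generation and the streaming-insert step matters: it forces a checkpoint, so the IDs become stable across worker retries.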
[jira] [Commented] (BEAM-8367) Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS
[ https://issues.apache.org/jira/browse/BEAM-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16958139#comment-16958139 ]

Kenneth Knowles commented on BEAM-8367:
---------------------------------------

The PR has been merged. Is this fixed?
[jira] [Commented] (BEAM-8367) Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS
[ https://issues.apache.org/jira/browse/BEAM-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951448#comment-16951448 ]

Pablo Estrada commented on BEAM-8367:
-------------------------------------

https://github.com/apache/beam/pull/9797 is out to fix this.
[jira] [Commented] (BEAM-8367) Python BigQuery sink should use unique IDs for mode STREAMING_INSERTS
[ https://issues.apache.org/jira/browse/BEAM-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951446#comment-16951446 ]

Pablo Estrada commented on BEAM-8367:
-------------------------------------

Working on fixing this.