[ 
https://issues.apache.org/jira/browse/FLINK-34883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34883:
-----------------------------------
    Labels: github-import pull-request-available  (was: github-import)

> Error on Postgres-CDC using incremental snapshot with UUID column as PK
> -----------------------------------------------------------------------
>
>                 Key: FLINK-34883
>                 URL: https://issues.apache.org/jira/browse/FLINK-34883
>             Project: Flink
>          Issue Type: Bug
>          Components: Flink CDC
>            Reporter: Flink CDC Issue Import
>            Priority: Major
>              Labels: github-import, pull-request-available
>
> A majority of our Postgres databases use UUIDs as primary keys.
> When we enable 'scan.incremental.snapshot.enabled' = 'true', Flink CDC tries
> to split the table into chunks.
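> For reference, a minimal sketch of such a source definition in Flink SQL
> (hostname, credentials, and table/column names are placeholders; the UUID
> primary key is mapped to STRING on the Flink side):
>
>   CREATE TABLE orders_source (
>     id STRING,
>     payload STRING,
>     PRIMARY KEY (id) NOT ENFORCED
>   ) WITH (
>     'connector' = 'postgres-cdc',
>     'hostname' = 'localhost',
>     'port' = '5432',
>     'username' = 'postgres',
>     'password' = 'postgres',
>     'database-name' = 'mydb',
>     'schema-name' = 'public',
>     'table-name' = 'orders',
>     'slot.name' = 'flink_slot',
>     'scan.incremental.snapshot.enabled' = 'true'
>   );
>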
> The splitTableIntoChunks function relies on the queryMinMax function, which
> fails when computing MIN and MAX over the UUID column, because Postgres does
> not provide min()/max() aggregates for the uuid type.
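> For illustration (hypothetical table and column names), the chunk-boundary
> query has roughly this shape; the plain form fails, while the text-cast
> variant succeeds:
>
>   -- Fails: ERROR: function min(uuid) does not exist
>   SELECT MIN(id), MAX(id) FROM public.orders;
>
>   -- Works, at the cost of comparing keys by their text form:
>   SELECT MIN(id::text), MAX(id::text) FROM public.orders;
>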
> Is there a way around this?
> When we convert the column to VARCHAR rather than UUID, everything seems to
> work. However, we did not find a way to cast the UUIDs to VARCHAR during chunk
> splitting without editing the source code or altering the source table.
> Disabling incremental snapshots also avoids the issue, since the table is no
> longer split into chunks, but it means a global read lock is taken on the data
> before the snapshot read, which we want to avoid.
> Thanks in advance for the help!
> ---------------- Imported from GitHub ----------------
> Url: https://github.com/apache/flink-cdc/issues/3108
> Created by: [olivier-derom|https://github.com/olivier-derom]
> Labels: 
> Created at: Wed Mar 06 16:55:03 CST 2024
> State: open



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
