[
https://issues.apache.org/jira/browse/FLINK-22825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
徐州州 updated FLINK-22825:
------------------------
    Description: (was: insert into dwd_order_detail
select
  ord.Id,
  ord.Code,
  Status,
  concat(cast(ord.Id as STRING), if(oed.Id is null, 'oed_null', cast(oed.Id as STRING)), DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd')) as uuids,
  TO_DATE(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd')) as As_Of_Date
from orders ord
left join order_extend oed
  on ord.Id = oed.OrderId
  and oed.IsDeleted = 0
  and oed.CreateTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
where (
  ord.OrderTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
  or ord.ReviewTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
  or ord.RejectTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
) and ord.IsDeleted = 0;
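For reference, the upsert-kafka sink for this INSERT might be declared roughly as follows. This is a sketch only: the column types, topic name, and connector options are assumptions, not taken from the actual job; only the table name, the column list, and the uuids primary key come from the report.

```sql
-- Hypothetical sink DDL; types, topic, and options are assumed.
CREATE TABLE dwd_order_detail (
  Id BIGINT,
  Code STRING,
  Status INT,
  uuids STRING,
  As_Of_Date DATE,
  PRIMARY KEY (uuids) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'dwd_order_detail',            -- assumed topic name
  'properties.bootstrap.servers' = '...',  -- placeholder
  'key.format' = 'json',
  'value.format' = 'json'
);
```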
My upsert-kafka sink table uses uuids as its PRIMARY KEY.
This is the logic of my job: canal-json changelog streams read from Kafka
are joined and the result is written to an upsert-kafka table. I confirm
that version 1.12 also has this problem; I just upgraded from 1.12 to 1.13.
When I look up one user's order data (order number XJ0120210531004794) in
the original canal-json table, it appears as +U, which is normal:
| +U | XJ0120210531004794 | 50 |
| +U | XJ0120210531004672 | 50 |
But after the join writes to upsert-kafka, the data consumed from upsert-kafka is:
| +I | XJ0120210531004794 | 50 |
| -U | XJ0120210531004794 | 50 |
The order corresponds to two records, and nothing in the orders and
order_extend tables has changed since they were created. The dangling -U
row caused the data to be retracted rather than computed, so my data was
lost and the final result was wrong.)
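To illustrate why a changelog that ends in -U loses the row, here is a small Python sketch of how a downstream consumer materializes an upsert stream keyed by the primary key. This models Flink's changelog row kinds (+I/+U upsert the key, -U/-D retract it); it is an illustration of the semantics, not Flink's actual implementation.

```python
# Sketch: materialize a (kind, key, value) changelog into keyed state,
# the way an upsert-kafka consumer would (deletes arrive as tombstones).
def materialize(changelog):
    """Apply changelog entries in order; return the final keyed state."""
    state = {}
    for kind, key, value in changelog:
        if kind in ("+I", "+U"):
            state[key] = value      # insert / update-after: upsert the key
        elif kind in ("-U", "-D"):
            state.pop(key, None)    # update-before / delete: retract the key
    return state

# Expected stream for an unchanged order: -U is followed by +U, row survives.
ok = materialize([
    ("+I", "XJ0120210531004794", 50),
    ("-U", "XJ0120210531004794", 50),
    ("+U", "XJ0120210531004794", 50),
])

# The stream reported above ends with a dangling -U: the row is retracted
# and the order disappears from the result.
lost = materialize([
    ("+I", "XJ0120210531004794", 50),
    ("-U", "XJ0120210531004794", 50),
])
```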
Summary: flink sql1.13.1 (was: flink sql1.13.1 causes data loss based
on change log stream data join)
> flink sql1.13.1
> ---------------
>
> Key: FLINK-22825
> URL: https://issues.apache.org/jira/browse/FLINK-22825
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / API
> Affects Versions: 1.12.0, 1.13.1
> Reporter: 徐州州
> Priority: Blocker
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)