How does an update end up triggering a delete?
On 2020-09-14 11:37:07, "LittleFall" <1578166...@qq.com> wrote:
>Flink version:
>flink:1.11.1-scala_2.12
>Connectors:
>mysql-connector-java-8.0.21
>flink-sql-connector-kafka_2.12-1.11.1
>flink-connector-jdbc_2.12-1.11.1
>
>Flink SQL:
>
>CREATE TABLE source_user_name (
>    loan_no int,
>    name varchar,
>    PRIMARY KEY (loan_no) NOT ENFORCED
>) WITH (
>    'connector' = 'kafka',       
>    'topic' = 'test.username',
>    'properties.bootstrap.servers' = 'kafka:9092',
>    'properties.group.id' = 'test_flink_name_group',
>    'format'='canal-json',
>    'scan.startup.mode' = 'group-offsets'
>);
>
>CREATE TABLE test_flink_name_sink (
>    loan_no int,
>    name varchar,
>    PRIMARY KEY (loan_no) NOT ENFORCED
>) WITH (
>    'connector.type' = 'jdbc',
>    'connector.url' =
>'jdbc:mysql://host.docker.internal:3306/test?&rewriteBatchedStatements=true',
>    'connector.table' = 'username',
>    'connector.driver' = 'com.mysql.cj.jdbc.Driver',
>    'connector.username' = 'root',
>    'connector.password' = '',
>    'connector.write.flush.max-rows' = '5000',
>    'connector.write.flush.interval' = '1s'
>);
>
>insert into test_flink_name_sink (loan_no,name)
>select loan_no,name from source_user_name;
>
>
>SQL executed on the external MySQL database:
>
>CREATE TABLE username (
>    loan_no int PRIMARY KEY,
>    name varchar(10)
>);
>
>insert into username values (1,'a');
>
>The architecture is mysql-canal-kafka-flink-mysql.
>
>Then execute the following two statements at the same time (both lines submitted in one go):
>
>UPDATE `username` SET `name` = 'b' WHERE `loan_no` = 1;
>UPDATE `username` SET `name` = 'a' WHERE `loan_no` = 1;
>
>The row ends up missing in the target database, and this reproduces consistently.
>
>Analysis of the cause:
>
>```
>One upstream update lands as two SQL statements downstream:
>1. an insert of the after value
>2. a delete of the before value
>Moreover, the inserts and deletes are committed in two separate statement batches: the insert batch first, then the delete batch.
>
>If the upstream has two updates back to back, i.e. id:1,name:a is updated to id:1,name:b and then updated again to id:1,name:a,
>the problem is triggered:
>after the insert batch finishes, the row holds id:1,name:a;
>the first before-delete of the delete batch (delete id=1 name=a) then removes that final, correct row;
>the second before-delete (delete id=1 name=b) deletes nothing, because the row has already been deleted.
>```
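>
>To make the failure order concrete, here is a minimal sketch of the statement sequence the analysis above describes; the exact SQL the legacy sink emits may differ, and the upsert form below is only an assumption for illustration:
>
>```
>-- insert batch (after values of the two updates), flushed first
>INSERT INTO username (loan_no, name) VALUES (1, 'b')
>    ON DUPLICATE KEY UPDATE name = VALUES(name);
>INSERT INTO username (loan_no, name) VALUES (1, 'a')
>    ON DUPLICATE KEY UPDATE name = VALUES(name);  -- table now holds (1, 'a'), the correct final value
>
>-- delete batch (before values), flushed second
>DELETE FROM username WHERE loan_no = 1;  -- before value (1, 'a'): removes the correct final row
>DELETE FROM username WHERE loan_no = 1;  -- before value (1, 'b'): nothing left to delete
>```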
>
>The problem does not occur after switching to the new-style JDBC connector options.
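>
>For reference, a sketch of the same sink DDL using the new (Flink 1.11) JDBC connector option names; the option names follow the 1.11 flink-connector-jdbc documentation, so adjust as needed:
>
>```
>CREATE TABLE test_flink_name_sink (
>    loan_no int,
>    name varchar,
>    PRIMARY KEY (loan_no) NOT ENFORCED
>) WITH (
>    'connector' = 'jdbc',
>    'url' = 'jdbc:mysql://host.docker.internal:3306/test?rewriteBatchedStatements=true',
>    'table-name' = 'username',
>    'driver' = 'com.mysql.cj.jdbc.Driver',
>    'username' = 'root',
>    'password' = '',
>    'sink.buffer-flush.max-rows' = '5000',
>    'sink.buffer-flush.interval' = '1s'
>);
>```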
>
>Is this a known issue? Is there an issue number for it?
>
>
>
>--
>Sent from: http://apache-flink.147419.n8.nabble.com/
