[ 
https://issues.apache.org/jira/browse/FLINK-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17187592#comment-17187592
 ] 

Danny Chen commented on FLINK-19099:
------------------------------------

Before FLINK-15221, the SQL Kafka connector only supports "at least once" 
semantics, so records may be duplicated after a failure. You can use the 
DataStream API instead.
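A minimal sketch (plain Python, not Flink code) of why at-least-once delivery replays records: offsets are committed only at checkpoints, so any records processed after the last checkpoint are read again on restart. The checkpoint interval and message names here are illustrative.

```python
def consume(messages, start_offset, crash_at=None):
    """Process messages from start_offset onward; commit the offset only at
    every checkpoint. Returns (processed records, last committed offset)."""
    checkpoint_interval = 3  # illustrative; stands in for the 5 s interval
    processed = []
    committed = start_offset
    for offset in range(start_offset, len(messages)):
        if crash_at is not None and offset == crash_at:
            return processed, committed  # crash before the next checkpoint
        processed.append(messages[offset])
        if (offset + 1) % checkpoint_interval == 0:
            committed = offset + 1  # checkpoint: offset is now durable
    return processed, len(messages)

messages = [f"m{i}" for i in range(8)]

# First run crashes at offset 5: m3 and m4 were processed after the last
# checkpoint (committed offset 3) but their offsets were never committed.
first, committed = consume(messages, 0, crash_at=5)

# Restart resumes from the last committed offset, so m3 and m4 are
# processed a second time -> duplicates downstream.
second, _ = consume(messages, committed)

duplicates = sorted(set(first) & set(second))
print(duplicates)  # ['m3', 'm4']
```

Flink's Kafka source gives exactly-once *state* via checkpoint rewind, but records emitted to a non-transactional sink between checkpoints can still appear twice, which is exactly the behaviour reported below.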

> consumer kafka message repeat
> -----------------------------
>
>                 Key: FLINK-19099
>                 URL: https://issues.apache.org/jira/browse/FLINK-19099
>             Project: Flink
>          Issue Type: Bug
>          Components: API / DataStream, Connectors / Kafka
>    Affects Versions: 1.11.0
>            Reporter: zouwenlong
>            Priority: Major
>
> When a TaskManager is killed, my job has consumed some messages, but the 
> offsets are not committed. After a restart, the job consumes those Kafka 
> messages again. I use checkpointing with a 5-second interval. 
> I think this is a very common problem; how can it be solved?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
