[ https://issues.apache.org/jira/browse/FLINK-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883520#comment-16883520 ]
Jiangjie Qin commented on FLINK-10806:
--------------------------------------

Would setting {{auto.offset.reset}} for the KafkaConsumer meet your requirements? See [https://kafka.apache.org/documentation/] (search for "auto.offset.reset")

> Support multiple consuming offsets when discovering a new topic
> ---------------------------------------------------------------
>
>                 Key: FLINK-10806
>                 URL: https://issues.apache.org/jira/browse/FLINK-10806
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>    Affects Versions: 1.6.2
>            Reporter: Jiayi Liao
>            Assignee: Jiayi Liao
>            Priority: Major
>
> In KafkaConsumerBase, we discover the TopicPartitions and compare them with
> the restored state. That is reasonable when an existing topic's partition count
> changes. However, if we add a new topic that already holds a large amount of
> data and then restore the Flink program, the new topic's data will be consumed
> from the beginning, which may not be what we want. I think this should be an
> option for developers.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
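As a rough sketch of the suggested workaround: {{auto.offset.reset}} is a standard Kafka consumer property that controls where a consumer starts when no committed offset exists for a partition, and it can be passed to the Flink Kafka connector through the `Properties` object given to the consumer constructor. The snippet below only builds the properties; the commented-out `FlinkKafkaConsumer011` usage is illustrative of how such properties are typically wired in, not a tested Flink 1.6.2 program.

```java
import java.util.Properties;

public class KafkaOffsetResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.setProperty("group.id", "my-flink-job");            // placeholder group id

        // "latest" makes partitions with no committed/restored offset start
        // from the end of the log instead of the beginning; "earliest" is the
        // other common value. Note this only applies when Kafka itself has no
        // offset to fall back on.
        props.setProperty("auto.offset.reset", "latest");

        // Illustrative Flink wiring (requires flink-connector-kafka on the
        // classpath; not executed here):
        // FlinkKafkaConsumer011<String> consumer =
        //     new FlinkKafkaConsumer011<>(topicPattern, new SimpleStringSchema(), props);

        System.out.println(props.getProperty("auto.offset.reset"));
    }
}
```

Note the caveat implicit in the issue: when restoring from a Flink savepoint, partitions present in the restored state resume from the saved offsets regardless of this property, which is why newly discovered topics behave differently.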