[ https://issues.apache.org/jira/browse/FLINK-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885712#comment-16885712 ]
Jiayi Liao commented on FLINK-10806:
------------------------------------
[~becket_qin] Yes. What I'm trying to say is that there is no way to specify
the offset from which to consume a TopicAndPartition that is discovered at
runtime.
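To make the current behavior concrete, here is a minimal sketch against the
1.8-era universal Kafka connector (the broker address, group id, and topic
pattern are made up). The setStartFrom*() methods only take effect on a fresh
start without restored state, so none of them lets us choose the offset for a
TopicAndPartition that is only discovered after a restore:

{code:java}
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class NewTopicStartOffsetExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder address
        props.setProperty("group.id", "example-group");        // placeholder group id
        // enable topic/partition discovery every 30 seconds
        props.setProperty("flink.partition-discovery.interval-millis", "30000");

        // subscribe by pattern so topics added later are picked up at runtime
        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                Pattern.compile("events-.*"), new SimpleStringSchema(), props);

        // The setStartFrom*() methods only apply when the job starts without
        // restored state. After a restore, partitions of a newly matched topic
        // are consumed from the earliest offset, and there is no way to change
        // that per topic, which is the gap this ticket describes.
        consumer.setStartFromLatest();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(consumer).print();
        env.execute("new-topic-start-offset-example");
    }
}
{code}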
> Support multiple consuming offsets when discovering a new topic
> ---------------------------------------------------------------
>
> Key: FLINK-10806
> URL: https://issues.apache.org/jira/browse/FLINK-10806
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Kafka
> Affects Versions: 1.6.2, 1.8.1
> Reporter: Jiayi Liao
> Assignee: Jiayi Liao
> Priority: Major
>
> In FlinkKafkaConsumerBase we discover the TopicPartitions and compare them
> with the restoredState. That works well when an existing topic's partition
> count is scaled up. However, if we add a new topic that already holds a lot
> of data and then restore the Flink job, the new topic is consumed from the
> earliest offset, which may not be what we want. I think this should be an
> option for developers (see the sketch below).
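For comparison, a minimal sketch (topic name and offsets made up) of the
closest existing knob, setStartFromSpecificOffsets(): it lets you pin offsets
per partition, but only for partitions listed up front and only on a start
without restored state, so it cannot cover a topic that appears later through
discovery:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

public class SpecificOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder address
        props.setProperty("group.id", "example-group");        // placeholder group id

        // Offsets can only be pinned for partitions that are known up front ...
        Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();
        specificOffsets.put(new KafkaTopicPartition("existing-topic", 0), 23L);
        specificOffsets.put(new KafkaTopicPartition("existing-topic", 1), 42L);

        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                "existing-topic", new SimpleStringSchema(), props);

        // ... and even then only on a fresh start without restored state.
        // There is no counterpart for topics/partitions that are discovered
        // only after the job is restored, which is what this ticket asks for.
        consumer.setStartFromSpecificOffsets(specificOffsets);

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(consumer).print();
        env.execute("specific-offsets-example");
    }
}
{code}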