[ 
https://issues.apache.org/jira/browse/FLINK-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882759#comment-16882759
 ] 

Jiangjie Qin commented on FLINK-11792:
--------------------------------------

[~knaufk] This is a little surprising. By design, the KafkaConsumer should 
handle leader transitions itself by retrying, so they should be transparent to 
users. What was the exception you saw? Was this on an old Kafka version, e.g. 
0.8?

> Make KafkaConsumer more resilient to Kafka Broker Failures 
> -----------------------------------------------------------
>
>                 Key: FLINK-11792
>                 URL: https://issues.apache.org/jira/browse/FLINK-11792
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>    Affects Versions: 1.7.2
>            Reporter: Konstantin Knauf
>            Priority: Major
>
> When consuming from a topic with a replication factor > 1, the 
> FlinkKafkaConsumer could continue reading from that topic when a single 
> broker fails, by "simply" switching to the new leader(s) for all lost 
> partitions after the Kafka failover. Currently, the KafkaConsumer will most 
> likely throw an exception instead, because topic metadata is only fetched 
> periodically from the Kafka cluster.
>  
>  
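For context on the points above: how quickly the underlying KafkaConsumer notices a 
leader change is governed by a few standard Kafka client settings, which can be tuned 
via the Properties handed to the FlinkKafkaConsumer. Below is a minimal sketch; the 
broker addresses, topic name, group id, and the chosen values are placeholders, not 
recommendations from this ticket.

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class ResilientKafkaSourceSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        // Placeholder broker list and group id.
        props.setProperty("bootstrap.servers", "broker-1:9092,broker-2:9092,broker-3:9092");
        props.setProperty("group.id", "example-group");

        // metadata.max.age.ms: maximum age of cached topic metadata before the client
        // forces a refresh (Kafka default: 5 minutes). A lower value makes the consumer
        // discover new partition leaders sooner after a broker failure.
        props.setProperty("metadata.max.age.ms", "30000");

        // retry.backoff.ms / reconnect.backoff.ms: how long the client waits before
        // retrying a failed request or reconnecting to a broker.
        props.setProperty("retry.backoff.ms", "500");
        props.setProperty("reconnect.backoff.ms", "500");

        // Pass the tuned properties to the Flink Kafka source ("example-topic" is a placeholder).
        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props);

        env.addSource(consumer).print();
        env.execute("resilient-kafka-source-sketch");
    }
}
{code}

Whether these settings fully mask a broker failure depends on the Kafka client version 
in use, which is why the question above about the exact exception and Kafka version 
matters.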



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
