[ 
https://issues.apache.org/jira/browse/FLINK-17355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17336221#comment-17336221
 ] 

Flink Jira Bot commented on FLINK-17355:
----------------------------------------

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Exactly once kafka checkpointing sensitive to single node failures
> ------------------------------------------------------------------
>
>                 Key: FLINK-17355
>                 URL: https://issues.apache.org/jira/browse/FLINK-17355
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.10.0
>            Reporter: Teng Fei Liao
>            Priority: Major
>              Labels: stale-major
>
> With exactly-once semantics, when checkpointing, FlinkKafkaProducer creates a 
> new KafkaProducer for each checkpoint. KafkaProducer#initTransactions can 
> time out if a Kafka node becomes unavailable, even in the case of multiple 
> brokers and in-sync replicas (see 
> [https://stackoverflow.com/questions/55955379/enabling-exactly-once-causes-streams-shutdown-due-to-timeout-while-initializing]).
> In non-Flink cases, this might be fine since I imagine a KafkaProducer is not 
> created very often. With Flink, however, this is called per checkpoint, which 
> means that in practice an HA Kafka cluster isn't actually HA. This makes rolling a 
> Kafka node particularly painful, even in intentional cases such as config 
> changes or upgrades.
>  
> In our specific setup, these are our settings:
> - 5 Kafka nodes
> - per topic: replication factor = 3, in-sync replicas = 2, partitions = 3
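
A common mitigation for the initTransactions() stall described above is tuning the producer config that Flink hands to its internal KafkaProducer: max.block.ms bounds how long initTransactions() blocks, and transaction.timeout.ms must stay at or below the broker's transaction.max.timeout.ms (15 minutes by default). A minimal sketch, using only standard-library Properties; the helper name and broker addresses are placeholders, not part of any Flink API:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Hypothetical helper: builds the Properties that would be passed to
    // the FlinkKafkaProducer constructor.
    static Properties producerConfig() {
        Properties props = new Properties();
        // Placeholder broker list.
        props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092");
        // Bound how long initTransactions() may block so an unavailable
        // broker surfaces as a retriable checkpoint failure instead of a
        // long stall (default is 60s; shown here explicitly).
        props.setProperty("max.block.ms", "60000");
        // Must be <= the broker's transaction.max.timeout.ms (default 15 min);
        // Flink's exactly-once sink needs this long enough to span recovery.
        props.setProperty("transaction.timeout.ms", "900000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerConfig();
        System.out.println(props.getProperty("transaction.timeout.ms"));
    }
}
```

Whether this fully avoids the per-checkpoint initTransactions() cost is a separate question; it only bounds how long a single broker outage can block a checkpoint.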



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
