[
https://issues.apache.org/jira/browse/NIFI-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307844#comment-17307844
]
ASF subversion and git services commented on NIFI-8357:
-------------------------------------------------------
Commit 74ea3840ac98c8deff1ab83f673cc8fcb7072bcd in nifi's branch
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=74ea384 ]
NIFI-8357: Updated Kafka 2.6 processors to automatically handle recreating
Consumer Lease objects when an existing one is poisoned, even if using
statically assigned partitions
This closes #4926.
Signed-off-by: Peter Turcsanyi <[email protected]>
> ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using
> statically assigned partitions
> -----------------------------------------------------------------------------------------------------------
>
> Key: NIFI-8357
> URL: https://issues.apache.org/jira/browse/NIFI-8357
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Mark Payne
> Priority: Critical
> Fix For: 1.14.0
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> If using statically assigned partitions in ConsumeKafka_2_0,
> ConsumeKafkaRecord_2_0, ConsumeKafka_2_6, or ConsumeKafkaRecord_2_6 (by
> adding {{partitions.<hostname>}} properties), and a client connection
> fails, the processor recreates the connection but does not re-assign the
> partitions. As a result, the consumer stops consuming data from its
> partition(s), and the newly created Kafka client is leaked. Over time this
> can accumulate many leaked connections, potentially exhausting the heap or
> causing "IOException: too many open files".
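The fix described in the commit message boils down to: when a lease is poisoned by a client failure, discard it and create a replacement that re-applies the statically configured partition assignment, rather than leaking a partition-less consumer. A minimal sketch of that pattern follows; {{ConsumerLease}}, {{LeasePool}}, and their methods are simplified illustrative names, not the actual NiFi classes.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified model of the behavior described in the fix.
class ConsumerLease {
    private final Set<Integer> assignedPartitions;
    private boolean poisoned = false;

    ConsumerLease(Set<Integer> partitions) {
        this.assignedPartitions = partitions;
    }

    void poison() { poisoned = true; }             // mark lease unusable after a client failure
    boolean isPoisoned() { return poisoned; }
    Set<Integer> getAssignedPartitions() { return assignedPartitions; }
}

class LeasePool {
    private final Set<Integer> staticPartitions;   // from partitions.<hostname> properties
    private ConsumerLease current;

    LeasePool(Set<Integer> staticPartitions) {
        this.staticPartitions = staticPartitions;
    }

    // Return a usable lease: if the existing one is poisoned, replace it with
    // a fresh lease that re-assigns the same statically configured partitions.
    ConsumerLease obtainLease() {
        if (current == null || current.isPoisoned()) {
            current = new ConsumerLease(new HashSet<>(staticPartitions));
        }
        return current;
    }
}

public class StaticAssignmentDemo {
    public static void main(String[] args) {
        LeasePool pool = new LeasePool(Set.of(0, 1, 2));
        ConsumerLease lease = pool.obtainLease();
        lease.poison();                            // simulate a failed client connection
        ConsumerLease fresh = pool.obtainLease();
        System.out.println(fresh != lease);                                     // new lease created
        System.out.println(fresh.getAssignedPartitions().equals(Set.of(0, 1, 2))); // partitions re-assigned
    }
}
```

Before the fix, the equivalent of {{obtainLease()}} could hand back (or silently replace) a consumer without re-running the static assignment step, which is what left the partitions unconsumed and the old client unreleased.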
--
This message was sent by Atlassian Jira
(v8.3.4#803005)