[ https://issues.apache.org/jira/browse/NIFI-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17781812#comment-17781812 ]

ASF subversion and git services commented on NIFI-12194:
--------------------------------------------------------

Commit 9a5a56e79eb26f0c6ccf4d7f6cd9a1fef308c2eb in nifi's branch 
refs/heads/support/nifi-1.x from Paul Grey
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=9a5a56e79e ]

NIFI-12194 Added Yield on Exceptions in Kafka Processors

- Catching KafkaException and yielding for publisher lease requests improves 
behavior when the Processor is unable to connect to Kafka Brokers

This closes #7955

Signed-off-by: David Handermann <exceptionfact...@apache.org>
(cherry picked from commit 75c661bbbe56a7951974a701921af9da74dd0d68)
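For context, the change described in the commit message follows the usual NiFi
back-off pattern: catch the client exception when a lease cannot be obtained and
call ProcessContext.yield() so the framework pauses scheduling instead of
retrying in a tight loop. The sketch below only illustrates that pattern; the
class name ExamplePublishKafka and the obtainLease() helper are placeholders,
not the actual NIFI-12194 diff.

    import org.apache.kafka.common.KafkaException;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.exception.ProcessException;

    // Illustrative sketch of the catch-and-yield pattern; not the actual NIFI-12194 change.
    public class ExamplePublishKafka extends AbstractProcessor {

        @Override
        public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
            final Object lease;
            try {
                // Obtaining a publisher lease can throw KafkaException when the
                // brokers are unreachable or the security settings do not match.
                lease = obtainLease(context);
            } catch (final KafkaException e) {
                getLogger().error("Failed to obtain Kafka publisher lease", e);
                // Yield so the framework backs off instead of re-scheduling the
                // processor immediately in a tight loop.
                context.yield();
                return;
            }
            // ... publish the queued FlowFiles using the lease ...
        }

        // Placeholder for the pooled-lease lookup used by the real processors.
        private Object obtainLease(final ProcessContext context) {
            throw new KafkaException("Unable to connect to Kafka brokers");
        }
    }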


> Nifi fails when ConsumeKafka_2_6 processor is started with PLAINTEXT 
> securityProtocol
> -------------------------------------------------------------------------------------
>
>                 Key: NIFI-12194
>                 URL: https://issues.apache.org/jira/browse/NIFI-12194
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.21.0, 1.23.0
>            Reporter: Peter Schmitzer
>            Assignee: Paul Grey
>            Priority: Major
>         Attachments: image-2023-09-27-15-56-02-438.png
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When starting the ConsumeKafka_2_6 processor with the SASL mechanism GSSAPI and 
> the security protocol PLAINTEXT (although SSL would be correct), the UI crashed 
> and NiFi was no longer accessible. Not only was the frontend inaccessible, but 
> the other processors in our flow also stopped performing well according to our 
> dashboards.
> We were able to reproduce this using the configuration described above.
> Our NiFi in preprod (where this was detected) runs in a Kubernetes cluster:
>  * version 1.21.0
>  * 3 nodes
>  * jvmMemory: 1536m
>  * 3G memory (limit)
>  * 400m cpu (request)
>  * zookeeper
> The logs do not contain any unusual entries when the issue is triggered. 
> Inspecting the pod metrics, we found a spike in memory usage.
> The issue is concerning for us because a rather innocent configuration parameter 
> in a single processor can bring down our whole cluster.
>  
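As a rough illustration of the configuration described in the report above (an
approximation, not the reporter's exact setup), the processor settings translate
to roughly the following Kafka client properties; the broker address and group
id are hypothetical placeholders.

    import java.util.Properties;

    // Rough equivalent of the reported processor settings as Kafka client
    // properties; broker address and group id are hypothetical placeholders.
    public class ReproConfigSketch {
        public static void main(String[] args) {
            final Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-broker:9092"); // hypothetical broker
            props.put("group.id", "nifi-consume-kafka");         // hypothetical group id
            props.put("security.protocol", "PLAINTEXT");          // as reported; the brokers expect an SSL-based protocol
            props.put("sasl.mechanism", "GSSAPI");                 // SASL mechanism selected on the processor
            props.forEach((key, value) -> System.out.println(key + "=" + value));
        }
    }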



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
