[ https://issues.apache.org/jira/browse/NIFI-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779665#comment-17779665 ]

Joe Witt commented on NIFI-12194:
---------------------------------

Slack thread: https://apachenifi.slack.com/archives/C0L9VCD47/p1698242200810429

Guillaume
  6 hours ago
Hello,
I just experienced something bad in NiFi 1.20.0
Let me know if that sounds normal to you and whether it may have been fixed in 
newer versions.
We're working on switching all our Kafka consumers to the latest version (2.0 
to 2.6 mainly) and also adding SSL authentication to the brokers.
On one processor, I made a mistake:
I used the SSL brokers (so with port 9093) but forgot to configure the security 
protocol (left at PLAINTEXT) and the SSL context service (left at "No value").
What we observed:
The memory usage on all our cluster nodes increased from around 48% to 68% 
(and never went down, even after the fix).
We reproduced this on a non-prod cluster: the cluster went OOM in less than one 
minute.
Questions:
Is it normal that the connection attempts lead to a global outage? Is it 
possible to set a maximum number of connection attempts?
Is it normal that the memory usage didn't go down after the fix? (Our cluster 
has been very stable in terms of memory usage for months now.)
Has the behaviour changed in the latest versions?
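
For context, the behaviour described here matches a long-standing Kafka client 
issue (tracked upstream as KAFKA-4090): a client that connects to an SSL 
listener while security.protocol is left at PLAINTEXT can misread the TLS 
handshake bytes as a message size and attempt a very large buffer allocation on 
every reconnect. A minimal standalone sketch outside NiFi, assuming a 
placeholder broker address, group id and topic name, would look roughly like 
this:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MisconfiguredConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // SSL listener on 9093, but security.protocol is left at its PLAINTEXT
        // default (the same mistake as in the processor configuration above).
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder host
        props.put("group.id", "repro-group");                      // placeholder group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic")); // placeholder topic
            while (true) {
                // Every poll retries the broken connection; the client can read
                // the TLS handshake bytes as a huge record size and try to
                // allocate a buffer of that size, which drives heap usage up.
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }
}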

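On the related question of capping connection attempts: as far as I know the 
Kafka client does not expose a hard limit on reconnect attempts, only backoff 
settings, and ConsumeKafka_2_6 should pass user-added dynamic properties 
through to the client configuration. A hedged sketch of the relevant client 
properties (the values are illustrative, not recommendations):

import java.util.Properties;

public class ReconnectBackoffSketch {
    public static void main(String[] args) {
        // In NiFi these would be added as dynamic properties on the processor
        // rather than set in code; the values are illustrative only.
        Properties props = new Properties();
        props.put("reconnect.backoff.ms", "1000");      // initial delay between reconnect attempts
        props.put("reconnect.backoff.max.ms", "30000"); // upper bound for the exponential backoff
        System.out.println(props); // sketch only: shows the settings this would add
    }
}

This slows the retry loop down but does not stop it; the real fix is still 
correcting the security protocol and SSL context service.
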
> Nifi fails when ConsumeKafka_2_6 processor is started with PLAINTEXT 
> securityProtocol
> -------------------------------------------------------------------------------------
>
>                 Key: NIFI-12194
>                 URL: https://issues.apache.org/jira/browse/NIFI-12194
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.21.0, 1.23.0
>            Reporter: Peter Schmitzer
>            Priority: Major
>         Attachments: image-2023-09-27-15-56-02-438.png
>
>
> When starting a ConsumeKafka_2_6 processor with the SASL mechanism GSSAPI and 
> the security protocol PLAINTEXT (although SSL would be correct), the UI 
> crashed and NiFi was no longer accessible. Not only was the frontend no 
> longer accessible, the other processors in our flow also stopped performing 
> well according to our dashboards.
> We were able to reproduce this by using the configuration described above.
> Our NiFi in preprod (where this was detected) runs in a Kubernetes cluster:
>  * version 1.21.0
>  * 3 nodes
>  * jvmMemory: 1536m
>  * 3G memory (limit)
>  * 400m cpu (request)
>  * zookeeper
> The logs do not show any unusual entries when the issue is triggered. 
> Inspecting the pod metrics, we found a spike in memory.
> The issue is a bit scary for us because a rather innocent config parameter in 
> one single processor is able to bring down our whole cluster.
>  
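
For comparison, a hedged sketch of the client-side settings the description 
implies would have been correct (SASL over SSL with GSSAPI). The host, service 
name, path and password below are placeholders, and in NiFi these map to the 
processor's Security Protocol, Kerberos and SSL Context Service properties 
rather than to raw client properties:

import java.util.Properties;

public class CorrectedSecuritySettingsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093");       // placeholder SSL listener
        props.put("security.protocol", "SASL_SSL");                      // instead of PLAINTEXT
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");                // placeholder service name
        props.put("ssl.truststore.location", "/path/to/truststore.jks"); // placeholder path
        props.put("ssl.truststore.password", "changeit");                // placeholder password
        System.out.println(props); // sketch only: a real consumer also needs JAAS/Kerberos setup
    }
}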



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
