[ 
https://issues.apache.org/jira/browse/NIFI-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-13615.
-----------------------------------
    Resolution: Feedback Received

Apache NiFi 1.x is no longer maintained and no new release is planned on the 
1.x release line. Marking as resolved as part of a cleanup operation. Please 
open a new issue with an updated description if this is still relevant for 
NiFi 2.x.

> Compressed Queue Sporadic Memory Leak and Connection Breakdown
> --------------------------------------------------------------
>
>                 Key: NIFI-13615
>                 URL: https://issues.apache.org/jira/browse/NIFI-13615
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.24.0
>         Environment: Kubernetes
>            Reporter: Marios Tsolekas
>            Priority: Minor
>         Attachments: nifi-logs.txt
>
>
> Hello,
> We have a 3-node NiFi 1.24.0 cluster running on K8s, secured with TLS. 
> Everything operates as expected, except on certain load-balanced queues, 
> where many errors pop up regarding unexpected data-frame indicators and 
> broken pipes. Attached is a sample of these errors. So far, these errors 
> only pop up in the queues between 3 specific processors, 
> ConvertRecord->MergeContent->PutMongoRecord. They cause a significant 
> slowdown of the processing and disappear, together with the slowdown, when 
> compression is disabled on these queues.
> Initially this caused the JVM heap to fill up because CommunicateAction 
> objects were not freed from the heap, but after backporting the patch from 
> https://issues.apache.org/jira/browse/NIFI-12532 these objects get freed. 
> Because the errors persisted, I figured this could be caused by the 
> mixing of various InputStreams with GZIPInputStream and their mismatched 
> .available() methods in StandardLoadBalanceProtocol.java, RecordReaders.java 
> and CompressableRecordReader.java. Alas, even after I wrapped the streams in a 
> custom stream that behaves as GZIPInputStream's .available() method 
> expects, the issue remained. So far I haven't identified any egregious 
> memory leaks due to this issue (other than the one patched above), but the 
> significant slowdown of the processing remains, and there seem to be more than 
> the usual leftovers in the content repository. What could be causing this, 
> given that the overwhelming majority of queues don't display this behavior?
>  
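The wrapping approach the reporter describes can be sketched roughly as follows. This is a minimal illustration, not the reporter's actual patch: the class name is hypothetical, and it assumes the relevant contract is that GZIPInputStream.available() returns 1 until end-of-stream has been observed and 0 afterwards, regardless of how much data the underlying stream has buffered.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical wrapper that mimics GZIPInputStream's available() semantics:
// report 1 until a read has hit EOF, then report 0. This differs from, e.g.,
// BufferedInputStream, whose available() reflects buffered byte counts.
class GzipStyleAvailableInputStream extends FilterInputStream {
    private boolean eofReached = false;

    GzipStyleAvailableInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b < 0) {
            eofReached = true;
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n < 0) {
            eofReached = true;
        }
        return n;
    }

    @Override
    public int available() throws IOException {
        // 1 = "possibly more data", 0 = "EOF seen", as GZIPInputStream does.
        return eofReached ? 0 : 1;
    }
}
```

Code in the load-balance path that polls available() to detect stream boundaries would see consistent answers from wrapped and unwrapped streams under this scheme, which is presumably what the reporter was testing.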



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
