Hi Pierre,

I'm using NiFi version 1.6.0.

04/03/2018 08:16:22 UTC

Tagged nifi-1.6.0-RC3

From 7c0ee01 on branch NIFI-4995-RC3
FlowFile expiration = 0
Back pressure object threshold = 20000
Back pressure data size threshold = 1GB

The connection is just from the output port of one PG to the input port of
another PG.  Inside the PGs, all the connections between processors use the
same settings.
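
For reference, this is roughly how I pulled those values off the connection
to double-check them. It's just a quick sketch against the NiFi REST API on
an unsecured node; the base URL, connection UUID, and exact field names
below are placeholders and assumptions on my side, not something to rely on:

# Sketch only: read a connection's backpressure settings and current queue
# depth from the NiFi REST API.  Base URL and connection UUID are
# placeholders; assumes an unsecured node and NiFi 1.x field names.
import requests

NIFI_API = "http://localhost:8080/nifi-api"   # placeholder
CONNECTION_ID = "<connection-uuid>"           # the PG-output-to-PG-input connection

resp = requests.get(f"{NIFI_API}/connections/{CONNECTION_ID}")
resp.raise_for_status()
entity = resp.json()

component = entity["component"]
print("FlowFile expiration:           ", component["flowFileExpiration"])
print("Back pressure object threshold:", component["backPressureObjectThreshold"])
print("Back pressure data size:       ", component["backPressureDataSizeThreshold"])

# Cluster-aggregated snapshot of what is actually sitting in the queue.
snapshot = entity["status"]["aggregateSnapshot"]
print("Queued right now:              ", snapshot["queued"])

Against a secured cluster you'd also need to pass a token or client certs,
which I've left out here.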

Regards,

Jeremy

On Fri, Aug 30, 2019 at 4:14 PM Pierre Villard <[email protected]>
wrote:

> Hi Jeremy,
>
> It seems very weird that you get 200M flow files in a relationship that
> should have backpressure set at 20k flow files. While backpressure is not a
> hard limit, you should not get to such numbers. Can you give us more
> details? What version of NiFi are you using? What's the configuration of
> your relationship between your two process groups?
>
> Thanks,
> Pierre
>
> On Fri, Aug 30, 2019 at 07:46, Jeremy Pemberton-Pigott <
> [email protected]> wrote:
>
>> Hi,
>>
>> I have a 3-node NiFi 1.6.0 cluster.  It ran out of disk space when there
>> was a logjam of flow files (from slow HBase lookups).  My queue is
>> configured for 20,000, but one node has over 206 million flow files stuck
>> in the queue.  I managed to clear up some disk space to get things going
>> again, but it seems that after a few minutes of processing, all the
>> processors in the Log Parser process group stop processing and show zero
>> in/out.
>>
>> Is this a bug fixed in a later version?
>>
>> Each time, I have to tear down the Docker containers running NiFi and
>> restart them to process a few tens of thousands of flow files, then repeat
>> every few minutes.  Any idea what I should do to keep it processing the
>> data (nifi-app.log doesn't show me anything unusual about the stop or
>> delay) until the one node can clear the backlog?
>>
>> [image: image.png]
>>
>> Regards,
>>
>> Jeremy
>>
>
