I am evaluating NiFi and it ran out of disk space very quickly. I am reading
roughly 150 MB of data from S3 (using ListS3 and FetchS3) every 20 minutes and
posting it to an HTTP endpoint. While trying to fix this, I got NiFi into a
hung state.

1. I emptied the content repository, but that put NiFi into a hung state: I
cannot empty queues, and the flow does not move ahead. It gives the following error:

0 FlowFiles (0 bytes) were removed from the queue.
Failed to drop FlowFiles due to java.io.IOException: All Partitions have
been blacklisted due to failures when attempting to update

How can I get out of this? I basically want NiFi to start fresh and forget the
state of the previously read data.
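(Editor's note on question 1, not part of the original message: a common way to fully reset a standalone NiFi instance is to stop it, delete the repository directories, and clear local component state. The paths below are the defaults from conf/nifi.properties; NIFI_HOME is an assumption and should point at your actual install. This discards ALL queued data, so it is a sketch of a last-resort reset, not an official procedure.)

```shell
#!/bin/sh
# Hypothetical full reset of a standalone NiFi install.
# NIFI_HOME is an assumed install location -- adjust to your environment.
NIFI_HOME="${NIFI_HOME:-/opt/nifi}"

# 1. Stop NiFi first; deleting repositories while it runs corrupts state.
if [ -x "$NIFI_HOME/bin/nifi.sh" ]; then
    "$NIFI_HOME/bin/nifi.sh" stop
fi

# 2. These directories hold the queued FlowFiles, their content, and the
#    provenance history (default locations from conf/nifi.properties):
for repo in content_repository flowfile_repository provenance_repository; do
    rm -rf "$NIFI_HOME/$repo"
done

# 3. ListS3 remembers the timestamp of the last object it listed in
#    component state; clearing local state makes the next listing start
#    from scratch. (On a cluster, that state lives in ZooKeeper instead.)
rm -rf "$NIFI_HOME/state/local"

# 4. Restart with empty repositories and no processor state.
if [ -x "$NIFI_HOME/bin/nifi.sh" ]; then
    "$NIFI_HOME/bin/nifi.sh" start
fi
```

If you only want ListS3 to re-list old objects (without wiping queues), right-clicking the processor and choosing View state → Clear state in the UI is the narrower fix.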

2. What are the possible optimisations I could make?
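(Editor's note on question 2, not part of the original message: at this data rate, the disk usage often comes from content archiving, which keeps FlowFile content on disk after it leaves the flow. A sketch of the relevant conf/nifi.properties settings; the values shown are illustrative, not recommendations:)

```properties
# Turn content archiving off entirely...
nifi.content.repository.archive.enabled=false

# ...or, if archiving stays on, bound it instead (example values):
# nifi.content.repository.archive.max.retention.period=1 hours
# nifi.content.repository.archive.max.usage.percentage=50%
```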

Thank you!



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Nifi-in-a-hung-state-tp14713.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
