Hello,

A couple of questions: what version of NiFi are you using, and what is the
maximum amount of data (approximate size) you are putting into the
attributes of a FlowFile? Specifically, the error looks a lot like this
one [1]. It only occurs if you're pulling more than 64 KB into the
attributes of a FlowFile, and it slowly corrupts the FlowFile repo
partition by partition.

As for starting fresh, just stop NiFi and remove the repository directories
(by default flowfile_repository, content_repository, and
provenance_repository; the exact paths are set in conf/nifi.properties).
This will remove all data from the flow but keep your configuration.
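As a rough sketch, assuming a default tarball install rooted at NIFI_HOME (the directory names below are the stock defaults from conf/nifi.properties; double-check your own configured paths before deleting anything):

```shell
# Adjust to your install root.
NIFI_HOME=/opt/nifi

# Stop NiFi before touching the repositories.
"$NIFI_HOME"/bin/nifi.sh stop

# Remove the repository directories (default names from conf/nifi.properties:
# nifi.flowfile.repository.directory, nifi.content.repository.directory.*,
# nifi.provenance.repository.directory.*). This deletes all in-flight data
# but leaves flow.xml.gz and the rest of conf/ untouched.
rm -rf "$NIFI_HOME"/flowfile_repository \
       "$NIFI_HOME"/content_repository \
       "$NIFI_HOME"/provenance_repository

# Start fresh; NiFi recreates the repositories on startup.
"$NIFI_HOME"/bin/nifi.sh start
```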

For your second question, it's hard to suggest optimizations without
knowing more about your flow and setup.

[1] https://issues.apache.org/jira/browse/NIFI-3389

Joe

On Mon, Feb 13, 2017 at 8:02 AM, sam <[email protected]> wrote:

> I am evaluating NiFi and it ran out of disk space very quickly. I am
> reading approx. 150 MB of data from S3 (using ListS3 and FetchS3) every
> 20 minutes and posting it to an HTTP endpoint. While trying to fix this
> I got into a hung state.
>
> 1. I emptied the content repository, but that put NiFi in a hung state:
> I cannot empty queues and it does not move ahead. It gives the following
> error:
>
> 0 FlowFiles (0 bytes) were removed from the queue.
> Failed to drop FlowFiles due to java.io.IOException: All Partitions have
> been blacklisted due to failures when attempting to update
>
> How can I get out of it? I basically want NiFi to start fresh and forget
> the state of previously read data.
>
> 2. What are the possible optimisations I could do?
>
> Thank you!
>
>
>
> --
> View this message in context: http://apache-nifi-developer-
> list.39713.n7.nabble.com/Nifi-in-a-hung-state-tp14713.html
> Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>



-- 
*Joe Percivall*
linkedin.com/in/Percivall
e: [email protected]
