No confusion, Nick. I hear you. In our case...
We attempt to reroute flowfiles back through processors that, for
whatever reason, might bring them to a state in which they can be
successful (much to explain there, but...), and others we route to a
similar, "deadletter" store where their contents are examined by hand
and changes made (to the flow and processors used, to the original
document contents, etc.), then readmitted to the flow later.
(Note: We have many custom processors doing very special things.)
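As a rough illustration of that triage (purely hypothetical names and reason strings, not code from our flow), the decision between retrying and dead-lettering might look like:

```python
# Hypothetical sketch of the triage described above: failures that a
# re-route might fix go back through the flow; everything else is parked
# in a "deadletter" store for hand inspection and later readmission.
# The reason strings are invented for illustration.
RETRYABLE_REASONS = {"timeout", "connection.refused", "throttled"}

def triage(failure_reason: str) -> str:
    """Return the route a failed flowfile should take."""
    if failure_reason in RETRYABLE_REASONS:
        return "retry"        # conditions may have changed; try again
    return "deadletter"       # needs a human: fix flow, processor, or content
```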
I'm personally all ears on this thread--eager to hear what others will
say. Thanks for hosting it!
Russ
On 03/01/2017 01:38 PM, Nick Carenza wrote:
Sorry for the confusion, I meant to put emphasis on the _you_, as in
'you all' or other users of NiFi. I am looking to get insight into
solutions others have implemented to deal with failures.
- Nick
On Wed, Mar 1, 2017 at 12:29 PM, Oleg Zhurakousky
<[email protected]> wrote:
Nick
Since you’ve already designed a Process Group (PG) that is specific
to failed flow files, I am not sure I understand your last
question: “...How do you manage failure relationships?...”
I am assuming that within your global flow all failure
relationships are sent to this PG, which is essentially a Dead
Letter Storage.
Are you asking about how to get more information from the
failed Flow Files (i.e., failure location, reason, etc.)?
Cheers
Oleg
On Mar 1, 2017, at 3:21 PM, Nick Carenza
<[email protected]> wrote:
I have a lot of processors in my flow, all of which can, and do,
route flowfiles to their failure relationships at some point.
In the first iteration of my flow, I routed every failure
relationship to an inactive DebugFlow, but monitoring these was
difficult: I wouldn't get notifications when something started to
fail, and if the queue filled up it would apply backpressure
and prevent new, good flowfiles from being processed.
Not only was that a poor way to handle failures, but my
flow was littered with all of these do-nothing processors and was
an eyesore. So then I tried routing processor failure
relationships into themselves, which tidied up my flow but caused
NiFi to go berserk when a failure occurred, because the failure
relationship is not penalized (nor should it be) and most
processors don't provide a 'Retry' relationship (InvokeHTTP being
a notable exception). But really, most processors wouldn't
conceivably succeed if they were tried again. I mostly just
wanted the flowfiles to sit there until I had a chance to check
out why they failed and fix them manually.
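One common way to keep self-routed failures from looping forever (a sketch only; the attribute name and limit are made up, not something my flow uses) is a retry counter maintained with UpdateAttribute and checked with RouteOnAttribute, roughly:

```python
# Sketch of a capped self-retry: increment a counter attribute on each
# failure and dead-letter the flowfile once the cap is hit. In a NiFi
# flow this logic would live in UpdateAttribute + RouteOnAttribute;
# "retry.count" and MAX_RETRIES are illustrative names.
MAX_RETRIES = 3

def route_failed(attributes: dict) -> str:
    """Decide where a failed flowfile goes, updating its retry counter."""
    count = int(attributes.get("retry.count", "0")) + 1
    attributes["retry.count"] = str(count)
    if count <= MAX_RETRIES:
        return "retry"        # loop back into the processor
    return "deadletter"       # stop looping; park for manual review
```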
This leads me to https://issues.apache.org/jira/browse/NIFI-3351.
I think I need a way to store failed flowfiles, fix them, and
reprocess them. The process group I am currently considering
implementing everywhere is:
Input Port [Failed Flowfile] --> PutS3 deadletter/<failure
location>/<failure reason>/${uuid} --> PutSlack
ListS3 deadletter/<failure location>/<failure reason>/ -->
FetchS3 --> Output Port [Fixed]
This gives me storage of failed messages, logically grouped and in
a place that won't block up my flow since S3 never goes down,
err... wait. Configurable process groups or templates like
https://issues.apache.org/jira/browse/NIFI-1096 would make this
easier to reuse.
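For what it's worth, the S3 key in that sketch could be assembled from flowfile attributes roughly like this ("failure.location" and "failure.reason" are hypothetical attribute names, i.e. exactly the metadata NIFI-3351 asks for; "uuid" is a standard flowfile attribute):

```python
# Sketch of building the dead-letter S3 object key from flowfile
# attributes. "failure.location" and "failure.reason" are hypothetical
# attribute names; "uuid" exists on every flowfile.
def deadletter_key(attributes: dict) -> str:
    return "deadletter/{location}/{reason}/{uuid}".format(
        location=attributes["failure.location"],
        reason=attributes["failure.reason"],
        uuid=attributes["uuid"],
    )
```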
How do you manage failure relationships?
- Nick