[ 
https://issues.apache.org/jira/browse/NIFI-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17817161#comment-17817161
 ] 

Denis Jakupovic commented on NIFI-12783:
----------------------------------------

"Because other users expect the existing behavior of the PublishKafka 
processor, we would want to make any changes in a way that would preserve this. 
Connectivity failures should, by default, yield as they do currently. But a 
reasonable specialization might involve a new property that specified this 
alternate routing, perhaps along with a new relationship that could optionally 
receive relevant FlowFiles on encountering this situation."

I think that most people do not expect this behavior. By design, NiFi should 
route any failure to the failure relationship, or only yield if the user 
specifies this in the processor's properties. 

In most cases, people route the failure queue back into the processor and 
penalize the FlowFiles accordingly; this is a standard approach. But if the 
FlowFiles are yielded in the incoming queue, there is no way to react, e.g. 
route them to another processor and store them in S3 instead of Kafka. 

Yes, a separate relationship/queue for connectivity issues would be great. Only 
a relationship lets me decide how to react to connectivity issues.

Thank you [~pgrey]  

> Kafka Producer Processors do not route in failure queue on timeout
> ------------------------------------------------------------------
>
>                 Key: NIFI-12783
>                 URL: https://issues.apache.org/jira/browse/NIFI-12783
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.23.2
>            Reporter: Denis Jakupovic
>            Priority: Major
>
> Hi,
> the Kafka producer processors do not route FlowFiles on a timeout, e.g. 
> into the failure connection. Instead, they are yielded in the incoming 
> connection. You can see the behaviour here, e.g.:
> [https://stackoverflow.com/questions/71460008/apache-nifi-publishkafka-timeout-exception]
> I think this is a design flaw. I have a use case where messages should be 
> dropped after a specific, configurable time. If the messages are yielded in 
> the incoming queue, they are always published once the Kafka brokers are 
> available again. I know I can set the expiration time in seconds or minutes 
> on the incoming queue, but it is not dynamically configurable because no 
> attributes are allowed. 
> Best
> Denis



--
This message was sent by Atlassian Jira
(v8.20.10#820010)