[ https://issues.apache.org/jira/browse/NIFI-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17817098#comment-17817098 ]

Denis Jakupovic commented on NIFI-12783:
----------------------------------------

Hi [~pgrey]

Thank you for your response.
 - Are we talking about PublishKafka_2_6 or PublishKafkaRecord_2_6?
 -- Both.
 - Is there a network timeout connecting to Kafka?
 -- Yes, or even a wrong bootstrap address.
 - Is there a network timeout sending to Kafka?
 -- Kafka is down and NiFi cannot write to Kafka.
 - Do you have any stack traces from your NiFi app log at the point in time 
where you see a yield to an incoming connection?
 -- No, but it is easy to reproduce: configure a wrong bootstrap address and 
try to publish the content of a FlowFile. The FlowFile will yield in the 
incoming connection of the Kafka publish processor.
 - What do you mean about needing dynamically configurable expiration times in 
the incoming queue?
 -- If the processor cannot send the data to Kafka and, on failure, routes it 
to the failure relationship, I can decide for myself what to do with the data 
that was not successfully written to Kafka. Currently the data yields in front 
of the processor and I cannot react to the Kafka timeout.
 - If a queue could expire FlowFiles after an interval based on FlowFile 
attributes, would that meet your need?
 -- Yes, this is my current workaround: I expire FlowFiles when they are 
yielded in the incoming Kafka queue. 

> Kafka Producer Processors do not route in failure queue on timeout
> ------------------------------------------------------------------
>
>                 Key: NIFI-12783
>                 URL: https://issues.apache.org/jira/browse/NIFI-12783
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.23.2
>            Reporter: Denis Jakupovic
>            Priority: Major
>
> Hi,
> the Kafka producer processors do not route FlowFiles to the failure 
> relationship on a timeout; instead they are yielded in the incoming connection. 
> You can see the behaviour here e.g.:
> [https://stackoverflow.com/questions/71460008/apache-nifi-publishkafka-timeout-exception]
> I think this is a design flaw. I have a use case where messages should be 
> dropped after a specific, configurable time. If the messages are yielded in 
> the incoming queue, they are always published once the Kafka brokers are 
> available again. I know I can set a static expiration time in seconds or 
> minutes on the incoming queue, but it is not dynamically configurable 
> because no attributes are allowed. 
> Best
> Denis



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
