[ https://issues.apache.org/jira/browse/KAFKA-13683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503218#comment-17503218 ]

Matthias J. Sax commented on KAFKA-13683:
-----------------------------------------

{quote}We are using Kafka Streams 3.0.
{quote}
Should the ticket component field be set to `kafkaStreams` instead of 
`clients`?


For Kafka Streams this should actually be fixed in 2.8.0 via 
https://issues.apache.org/jira/browse/KAFKA-8803 / 
https://issues.apache.org/jira/browse/KAFKA-9274

Also, how do you set the `bootstrap.servers` config? Are they hard-coded IPs, or server 
URLs? If server URLs, what is your DNS caching setting inside the JVM? Cf. 
[https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html]
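
For reference, a minimal sketch of how the JVM DNS cache TTL and `bootstrap.servers` could be wired up for a transactional producer. The Confluent Cloud bootstrap hostname, the transactional id, and the 60-second TTL below are placeholders, not values taken from the ticket, and real Confluent Cloud clusters also need SASL/SSL settings, which are omitted here:

{code:java}
import java.security.Security;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerDnsExample {

    public static void main(String[] args) {
        // Lower the JVM-wide positive DNS cache TTL (seconds) so that a
        // replaced or recovered broker behind the same hostname is
        // re-resolved quickly. Must be set before the first name lookup.
        Security.setProperty("networkaddress.cache.ttl", "60");

        Properties props = new Properties();
        // Bootstrap via the cluster URL, not hard-coded broker IPs
        // (placeholder hostname below).
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "pkc-xxxxx.westeurope.azure.confluent.cloud:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Transactional producer, as in the ticket; the id is a placeholder.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "rest-server-tx-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // This is the call that times out in the report.
            producer.initTransactions();
            // beginTransaction() / send() / commitTransaction() as usual ...
        }
    }
}
{code}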
 

> Transactional Producer - Transaction with key xyz went wrong with exception: 
> Timeout expired after 60000milliseconds while awaiting InitProducerId
> --------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-13683
>                 URL: https://issues.apache.org/jira/browse/KAFKA-13683
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 2.6.0, 2.7.0, 3.0.0
>            Reporter: Michael Hornung
>            Priority: Critical
>              Labels: new-txn-protocol-should-fix
>         Attachments: AkkaHttpRestServer.scala, 
> image-2022-02-24-09-12-04-804.png, image-2022-02-24-09-13-01-383.png, 
> timeoutException.png
>
>
> We have an urgent issue with our customer using the Kafka transactional producer 
> with a Kafka cluster with 3 or more nodes. Our customer is using Confluent 
> Cloud on Azure.
> We see this exception regularly: "Transaction with key XYZ went wrong with 
> exception: Timeout expired after 60000milliseconds while awaiting 
> InitProducerId" (see attachment).
> We assume that the cause is a node which is down and that the producer still sends 
> messages to the “down node”. 
> We are using Kafka Streams 3.0.
> *We expect that if a node is down kafka producer is intelligent enough to not 
> send messages to this node any more.*
> *What’s the solution of this issue? Is there any config we have to set?*
> *This request is urgent because our customer will soon have production 
> issues.*
> *Additional information*
>  * send record --> see attachment “AkkaHttpRestServer.scala” – line 100
>  * producer config --> see attachment “AkkaHttpRestServer.scala” – line 126



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
