Hi everyone,

In light of the discussions about ordering guarantees in Kafka, I am struggling to understand how they affect KafkaStreams' internal *KafkaProducer*. In the official documentation, this section (https://docs.confluent.io/current/streams/concepts.html#out-of-order-handling) enumerates two causes "that could potentially result in out-of-order data *arrivals* with respect to their timestamps". But I haven't found anything that explains how the KafkaStreams *producers* handle delivery errors, and how that could lead to out-of-order messages being produced to output topics.

When I start my KafkaStreams application, I see the internal producers use the following in their default configuration:

enable.idempotence = false
max.in.flight.requests.per.connection = 5
retries = 2147483647
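
As a side note, here is a quick sketch of how I imagine one could check which producer settings Streams explicitly passes to its internal producer (assuming StreamsConfig#getProducerConfigs is the right way to inspect this; the application id, bootstrap servers, and client id are placeholders). As far as I can tell, the returned map only contains Streams' own overrides plus anything prefixed with "producer."; everything else falls back to the plain KafkaProducer defaults shown above.

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class DumpProducerConfigs {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ordering-check-app");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder

        final StreamsConfig config = new StreamsConfig(props);

        // Prints only the settings Streams itself overrides (plus any
        // "producer."-prefixed user overrides); unset keys keep the
        // regular KafkaProducer defaults.
        final Map<String, Object> producerConfigs =
                config.getProducerConfigs("ordering-check-client");             // placeholder client id
        producerConfigs.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}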
So I guess this could mean that, at the end of my topology, KafkaStreams could send out-of-order messages to an output topic: if a message fails to be delivered to the broker, the internal producer retries it while later messages may already be in flight, and one of those could be written first.

I've read that to guarantee ordering on the producer side one needs to set "max.in.flight.requests.per.connection=1". But I wonder, should one override this configuration for KafkaStreams applications? Something along the lines of the sketch below is what I have in mind.
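
This is only to make the question concrete, not a recommendation; the application id and bootstrap servers are placeholders, and I'm unsure which of these options (if any) is the intended one for Streams:

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class OrderingConfigSketch {
    public static Properties streamsProps() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        // Option 1: make the internal producer idempotent; the broker then
        // rejects out-of-sequence retries, so ordering is preserved even
        // with up to 5 in-flight requests per connection.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG), true);

        // Option 2 (the blunt approach I read about): a single in-flight
        // request per connection, at the cost of throughput.
        // props.put(StreamsConfig.producerPrefix(
        //         ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION), 1);

        // Option 3: exactly-once processing, which as far as I understand
        // also makes the internal producer idempotent (and transactional).
        // props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        return props;
    }
}

Thanks

Murilo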