I did indeed see that the `linger.ms` default was changed for producers (as mentioned, I haven't updated the client libs yet). What I was missing is that the server side is now also configured with a linger value for writing data to disk.
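For anyone else hitting this, Radu's pointer below suggests lowering the broker-side batching interval. A minimal, untested sketch for server.properties (I haven't checked whether it can also be changed dynamically, so a broker restart may be needed):

    # Group coordinator write batching interval (defaults to 5 ms on 4.x,
    # per the quoted thread below); lowering it trades write batching
    # efficiency for lower offset-commit latency.
    group.coordinator.append.linger.ms=0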
Thanks for the post nevertheless. :-)

Kind regards
Georg Friedrich

*From:* Brebner, Paul via users <[email protected]>
*Sent:* Tuesday, February 10, 2026 at 23:43
*To:* [email protected] <[email protected]>
*Cc:* Brebner, Paul <[email protected]>
*Subject:* RE: Reduced message consumption on Kafka 4.1 compared to 3.9

> Yes, this has caught a few people out, see my blog:
> https://www.linkedin.com/pulse/increased-lingering-apache-kafka-40-can-increase-latency-paul-brebner-z7qlc/?trackingId=wDlmi2cTSxmAw8KDTv2Tew%3D%3D
>
> Regards, Paul
>
> From: Radu Radutiu <[email protected]>
> Date: Wednesday, 11 February 2026 at 12:23 am
> To: [email protected] <[email protected]>
> Subject: Re: Reduced message consumption on Kafka 4.1 compared to 3.9
>
> There was a previous message on the list regarding increased latency after
> upgrading to Kafka 4.0. I believe the cause was: "The batching interval
> defaults to 5 ms and is controlled by the `group.coordinator.append.linger.ms`
> broker config option." Try lowering this and see if it helps.
>
> Radu
>
> On Tue, Feb 10, 2026 at 2:17 PM Georg Friedrich <[email protected]> wrote:
>
>> Hey everyone,
>>
>> I have a question regarding the message consumption rate on Kafka 4.1.1.
>> Recently I updated a Kafka cluster from version 3.9.1 to 4.1.1 (only the
>> cluster was updated; the client libraries are still on version 3.9.1).
>> The cluster was already using KRaft, so the update was straightforward:
>> just bump the version. After the upgrade, however, I observed that the
>> message rate of a (previously already very busy) Kafka consumer nearly
>> halved.
>>
>> I have mitigated the problem for now by adding more parallel consumers,
>> but I'm wondering whether this is expected or whether there was a change
>> to Kafka that I'm unaware of. What should be known here is that each
>> consumed message triggers a synchronous manual offset commit. I know this
>> is not ideal for high throughput (async or even batched commits would be
>> better). All I want to raise here is that the situation got worse compared
>> to version 3.9, and I'm wondering why that is.
>>
>> Also worth noting: the offsets topic replication factor is set to 3 and
>> min in-sync replicas is set to 2. I'm not sure whether this is connected;
>> perhaps the offset commit previously returned as soon as one replica
>> acked and this has now changed (just guessing here).
>>
>> In any case, I'm happy to hear your thoughts on this. Thanks so much in
>> advance.
>>
>> Kind regards
>> Georg Friedrich
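PS for the archive: since my consumer does a synchronous commit per message, every record now waits out the coordinator's linger window. Independent of the broker setting above, here is a rough, untested sketch of the client-side alternative with the 3.9.x Java consumer; the topic name, group id, and the process() helper are placeholders I made up:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class AsyncCommitSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // placeholder
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("example-topic")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record);
                    }
                    // Fire-and-forget commit: unlike a per-record commitSync(),
                    // this does not block on the coordinator's (now lingered) append.
                    consumer.commitAsync();
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            // application logic goes here
        }
    }

commitAsync() here still commits once per poll loop, but the consumer no longer blocks for the full append linger on every message; a final commitSync() on shutdown would keep the last offsets from being lost.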
