chia7712 commented on PR #16833: URL: https://github.com/apache/kafka/pull/16833#issuecomment-2291571934

> I believe that was intentional so that we ensure that requests are sent at least once even if timeout expired (which is what the classic consumer does). With the classic consumer, the fetch is performed in a do-while that creates the request and client.poll before checking the timer ([here](https://github.com/apache/kafka/blob/b767c655279083c0a7e958e3f319fe2cead13ddc/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L934-L960)). So the intention here was the same, requests are stored to be sent (in that partitionBySendability), and then expired. Makes sense?

I agree with "sent at least once", and the implementation of `maybeExpire` [0] does check `numAttempts`, so the request is sent at least once. The main issue is that `maybeExpire` calls `completeExceptionally`, and that is a potential issue if the request gets completed again when we handle the response.

[0] https://github.com/apache/kafka/blob/b767c655279083c0a7e958e3f319fe2cead13ddc/clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java#L853
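To illustrate the double-completion concern in isolation (this is a minimal standalone sketch, not the actual Kafka code; the future and the string result are hypothetical stand-ins for the commit request's result): once `completeExceptionally` has fired for the expired request, a later `complete` from the response handler is silently ignored by `CompletableFuture`, so the real response is dropped rather than surfaced.

```java
import java.util.concurrent.CompletableFuture;

public class DoubleCompleteDemo {
    public static void main(String[] args) {
        // Hypothetical stand-in for a commit request's result future.
        CompletableFuture<String> result = new CompletableFuture<>();

        // Expiration path (analogous to maybeExpire): fail the future on timeout.
        boolean expired = result.completeExceptionally(new RuntimeException("request timed out"));
        System.out.println("completeExceptionally returned: " + expired);

        // Response-handler path arriving later: the second completion is a no-op
        // and returns false, so the successful result is silently discarded.
        boolean completedAgain = result.complete("offset committed");
        System.out.println("complete returned: " + completedAgain);
        System.out.println("completed exceptionally: " + result.isCompletedExceptionally());
    }
}
```

So the future itself will not throw on the second completion; the risk is that the response-handling code assumes its completion took effect when it did not.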
> I believe that was intentional so that we ensure that requests are sent at least once even if timeout expired (which is what the classic consumer does). With the classic consumer, the fetch is performed in a do-while that creates the request and client.poll before checking the timer ([here](https://github.com/apache/kafka/blob/b767c655279083c0a7e958e3f319fe2cead13ddc/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L934-L960)). So the intention here was the same, requests are stored to be sent (in that partitionBySendability), and then expired. Makes sense? I agree to "sent at least once", and the implementation of `maybeExpire` [0] does check the `numAttempts`. that means the request is sent at least once. The main issue is - `maybeExpire` calls `completeExceptionally` and that will be a potential issue if the request get completed again when we handle the response. [0] https://github.com/apache/kafka/blob/b767c655279083c0a7e958e3f319fe2cead13ddc/clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java#L853 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org