lianetm commented on PR #16686: URL: https://github.com/apache/kafka/pull/16686#issuecomment-2306995526
I would say we're in a bit of uncharted territory with this decision of skipping the interrupt (whether with a timeout or 0), simply because the classic consumer does things differently and doesn't run into this same situation.

Looking at the current behaviour with the goal of staying close to it: the classic consumer keeps making progress on interrupt+close until it polls for the first time (to send the leave request). It's at that point that it short-circuits and propagates the interruption ([ConsumerNetworkClient.maybeThrowInterruptException](https://github.com/apache/kafka/blob/ced34e3176463759991d1ba322f1c793a243d068/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java#L298)). That means interrupt+close is given a chance to send the request, but not to sit and wait for responses.

To get to that behaviour with the new consumer, close(0) seems more accurate, and it addresses my main concern: how do we avoid a situation where an interrupted thread blocks on consumer.close just because we internally ignore the interruption? (Note that if we give time to consumer.close, it will also wait for responses to previous requests, not only the leave request.)

That being said, I'm still debating this one too, but wanted to raise the concern.
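For illustration only (not code from this PR), here is a minimal caller-side sketch of the close(0)-on-interrupt idea, assuming the standard public `KafkaConsumer.close(Duration)` API; the consumer configuration and the 30-second default timeout are hypothetical:

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class InterruptAwareCloseExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        try {
            // ... poll loop elided ...
        } finally {
            // If the calling thread was interrupted, close with a zero timeout so the
            // consumer still gets a chance to send the leave-group request but does
            // not sit and wait for responses to in-flight requests.
            Duration closeTimeout = Thread.currentThread().isInterrupted()
                    ? Duration.ZERO
                    : Duration.ofSeconds(30);
            consumer.close(closeTimeout);
        }
    }
}
```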