lianetm commented on code in PR #18702:
URL: https://github.com/apache/kafka/pull/18702#discussion_r1939721414


##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java:
##########
@@ -1326,7 +1331,7 @@ private void close(Duration timeout, boolean swallowException) {
         // We are already closing with a timeout, don't allow wake-ups from here on.
         wakeupTrigger.disableWakeups();
 
-        final Timer closeTimer = time.timer(timeout);
+        final Timer closeTimer = createTimer(timeout);

Review Comment:
   Not introduced here, but affected by this change: I notice that `runRebalanceCallbacksOnClose` consumes time from the close timeout (it receives the timer just to update it). That behaviour is not the same as in the classic consumer.
   
   In the classic consumer, the close timeout really only applies to requests. The callbacks run when closing the AbstractCoordinator, without time boundaries and, most importantly, without consuming time from the close timeout. Here we also call `runRebalanceCallbacksOnClose` without time boundaries, but it does consume time from the timeout param, right? Wouldn't that potentially leave less time for the subsequent requests? I'm concerned about existing apps that run callbacks and call close with a timeout that used to be "enough" but may not be anymore. Should we simply remove the timer from `runRebalanceCallbacksOnClose`?
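   
   To make the concern concrete, here is a minimal sketch using Kafka's `Timer` utility. The callback step is just a stand-in (not the real `runRebalanceCallbacksOnClose` signature); the point is that if the callbacks update the close timer, whatever time they take is no longer available for the close-related requests that follow.
   
   ```java
   import org.apache.kafka.common.utils.Time;
   import org.apache.kafka.common.utils.Timer;
   
   // Illustrative only: a slow rebalance callback eats into the close timer,
   // leaving less time for the commit/leave-group requests issued afterwards.
   public class CloseTimerSketch {
       public static void main(String[] args) throws InterruptedException {
           Time time = Time.SYSTEM;
           Timer closeTimer = time.timer(5_000L); // consumer.close(Duration.ofSeconds(5))
   
           Thread.sleep(2_000);   // stand-in for running user rebalance callbacks
           closeTimer.update();   // the callbacks' duration is charged to the close timer
   
           // Only ~3s remain for the requests that still need to run during close.
           System.out.println("Remaining for close requests: " + closeTimer.remainingMs() + " ms");
       }
   }
   ```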



##########
clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java:
##########
@@ -153,7 +153,8 @@ public class CommonClientConfigs {
     public static final String REQUEST_TIMEOUT_MS_DOC = "The configuration controls the maximum amount of time the client will wait "
                                                          + "for the response of a request. If the response is not received before the timeout "
                                                          + "elapses the client will resend the request if necessary or fail the request if "
-                                                         + "retries are exhausted.";
+                                                         + "retries are exhausted. This timeout also applies to the consumer close operation - "
+                                                         + "even if a larger timeout is specified for close, it will be limited by this value.";

Review Comment:
   Well, just for our understanding and related to my other comment: the classic behaviour we're trying to keep is that the request timeout applies to operations performed with the coordinator and the leader (coordinator-related requests and fetch sessions), not to other close operations that don't perform any request. I'm not suggesting adding any of that here, it would pollute this config doc imo. Actually, since this is really specific to consumer.close, isn't it enough to explain the behaviour in the close API javadoc?
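   
   For reference, a minimal usage sketch of the interaction being documented (assuming the behaviour described in this PR): each request issued during close is still bounded by `request.timeout.ms`, so a larger close timeout does not extend the wait on any single request. The broker address, group id, and serde classes below are placeholders.
   
   ```java
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   
   import java.time.Duration;
   import java.util.Properties;
   
   public class CloseTimeoutUsage {
       public static void main(String[] args) {
           Properties props = new Properties();
           props.put("bootstrap.servers", "localhost:9092");   // placeholder
           props.put("group.id", "example-group");             // placeholder
           props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
           props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
           props.put("request.timeout.ms", "30000"); // bounds each request made during close
   
           KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
           try {
               // ... subscribe / poll / process ...
           } finally {
               // Even with a 60s close timeout, each request issued while closing
               // (offset commit, leave group) is still limited by request.timeout.ms.
               consumer.close(Duration.ofSeconds(60));
           }
       }
   }
   ```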


