philipnee commented on code in PR #15186:
URL: https://github.com/apache/kafka/pull/15186#discussion_r1453840738


##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractFetch.java:
##########
@@ -376,25 +376,21 @@ protected Map<Node, FetchSessionHandler.FetchRequestData> prepareCloseFetchSessionRequests() {
         final Cluster cluster = metadata.fetch();
         Map<Node, FetchSessionHandler.Builder> fetchable = new HashMap<>();
 
-        try {
-            sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {
-                // set the session handler to notify close. This will set the next metadata request to send close message.
-                sessionHandler.notifyClose();
+        sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {

Review Comment:
   Thanks, Kirk, for the explanation. It seems like there are cases where we would want to clear the cache; the one I can think of is a topology change. That said, this is probably an unnoticeable optimization, since the handler lookup never grows large enough to become a problem.
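
   For illustration only, a minimal sketch of what clearing the handler lookup alongside the close notification could look like (this is not what the PR does; `sessionHandlers` is the existing `Map<Integer, FetchSessionHandler>` field in `AbstractFetch`):

   ```java
   // Hypothetical sketch, not the PR's change: notify each fetch session that we
   // are closing, then drop the cached handlers so entries for brokers that no
   // longer exist after a topology change are not retained.
   sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> sessionHandler.notifyClose());
   sessionHandlers.clear();
   ```

   Since the map is bounded by the number of brokers with active fetch sessions, the extra cleanup is probably not worth the added code.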


