dajac commented on code in PR #15186:
URL: https://github.com/apache/kafka/pull/15186#discussion_r1461535326
##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractFetch.java:
##########
@@ -376,25 +376,21 @@ protected Map<Node, FetchSessionHandler.FetchRequestData> prepareCloseFetchSessi
         final Cluster cluster = metadata.fetch();
         Map<Node, FetchSessionHandler.Builder> fetchable = new HashMap<>();
-        try {
-            sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {
-                // set the session handler to notify close. This will set the next metadata request to send close message.
-                sessionHandler.notifyClose();
+        sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {
Review Comment:
> There are some potential alternatives to solve this problem, but the path I took was to simply revert to the previous behavior which did not clear the cache.

I think that the cache was actually cleared in `Fetcher.close()` before the refactoring. I suppose that we could bring it back too, even if it does not gain us much in the end.
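
For illustration, here is a standalone sketch (not the actual `Fetcher` code; every name besides `sessionHandlers` is invented) of what bringing the clear back could look like, with the clear happening only after the close requests built from the cache have completed:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch, not the real Fetcher: restore the cache clear in close(),
// but only after the close requests derived from the cache have completed.
public class ClearOnCloseSketch {
    private final Map<Integer, String> sessionHandlers = new HashMap<>();

    private Map<Integer, String> prepareCloseRequests() {
        // The cache must still be intact when the requests are built.
        return new HashMap<>(sessionHandlers);
    }

    private void sendAndPollCloseRequests(Map<Integer, String> requests) {
        // Stand-in for the caller's client.poll() loop.
        requests.forEach((nodeId, handler) ->
                System.out.println("closed fetch session on node " + nodeId));
    }

    public void close() {
        sendAndPollCloseRequests(prepareCloseRequests());
        sessionHandlers.clear(); // safe now: the requests have completed
    }
}
```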
##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractFetch.java:
##########
@@ -376,25 +376,21 @@ protected Map<Node, FetchSessionHandler.FetchRequestData> prepareCloseFetchSessi
         final Cluster cluster = metadata.fetch();
         Map<Node, FetchSessionHandler.Builder> fetchable = new HashMap<>();
-        try {
-            sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {
-                // set the session handler to notify close. This will set the next metadata request to send close message.
-                sessionHandler.notifyClose();
+        sessionHandlers.forEach((fetchTargetNodeId, sessionHandler) -> {
+            // set the session handler to notify close. This will set the next metadata request to send close message.
+            sessionHandler.notifyClose();
-                // FetchTargetNode may not be available as it may have disconnected the connection. In such cases, we will
-                // skip sending the close request.
-                final Node fetchTarget = cluster.nodeById(fetchTargetNodeId);
+            // FetchTargetNode may not be available as it may have disconnected the connection. In such cases, we will
+            // skip sending the close request.
+            final Node fetchTarget = cluster.nodeById(fetchTargetNodeId);
-                if (fetchTarget == null || isUnavailable(fetchTarget)) {
-                    log.debug("Skip sending close session request to broker {} since it is not reachable", fetchTarget);
-                    return;
-                }
+            if (fetchTarget == null || isUnavailable(fetchTarget)) {
+                log.debug("Skip sending close session request to broker {} since it is not reachable", fetchTarget);
+                return;
+            }
-                fetchable.put(fetchTarget, sessionHandler.newBuilder());
-            });
-        } finally {
-            sessionHandlers.clear();
Review Comment:
For reference, `sessionHandlers.clear()` is indeed incorrect here. The issue is that the caller of `prepareCloseFetchSessionRequests` calls `client.poll()` to complete the requests created here. When those requests complete, the session handlers are no longer in the map, so the warning is logged.
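
To make the ordering concrete, here is a small self-contained simulation (invented names, not Kafka code) of why clearing the map before `poll()` completes the in-flight requests produces that warning:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone simulation of the ordering bug, not actual Kafka code.
public class ClearBeforePollDemo {
    static final Map<Integer, String> sessionHandlers = new HashMap<>();
    static final List<Runnable> inFlightCompletions = new ArrayList<>();

    // Stands in for prepareCloseFetchSessionRequests(): registers a completion
    // callback per handler that will look the handler up again later.
    static void prepareCloseRequests() {
        sessionHandlers.forEach((nodeId, handler) ->
                inFlightCompletions.add(() -> {
                    if (sessionHandlers.get(nodeId) == null) {
                        System.out.println("WARN: no session handler for node " + nodeId);
                    } else {
                        System.out.println("closed session on node " + nodeId);
                    }
                }));
        sessionHandlers.clear(); // the bug: handlers vanish before completion
    }

    // Stands in for client.poll(): runs the pending completions.
    static void poll() {
        inFlightCompletions.forEach(Runnable::run);
    }

    public static void main(String[] args) {
        sessionHandlers.put(0, "session-0");
        prepareCloseRequests();
        poll(); // prints the warning because the map was cleared too early
    }
}
```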
##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractFetch.java:
##########
@@ -376,25 +376,21 @@ protected Map<Node, FetchSessionHandler.FetchRequestData> prepareCloseFetchSessi
         final Cluster cluster = metadata.fetch();
Review Comment:
On a slightly different topic, should `prepareCloseFetchSessionRequests` be synchronized too? We used to have a `synchronized` block in the `close` method, but it was removed during the refactoring.
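
If we do bring it back, a minimal standalone sketch (not the actual `AbstractFetch` class; names are invented) of guarding the preparation method itself:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch, not AbstractFetch: synchronize the preparation method so
// concurrent callers cannot mutate the handler cache while requests are built.
public class SynchronizedPrepareSketch {
    private final Map<Integer, String> sessionHandlers = new HashMap<>();

    protected synchronized Map<Integer, String> prepareCloseRequests() {
        // Snapshot taken under the lock, mirroring the synchronization that
        // close() had before the refactoring.
        return new HashMap<>(sessionHandlers);
    }
}
```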
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]