chia7712 commented on code in PR #17353:
URL: https://github.com/apache/kafka/pull/17353#discussion_r1792720641


##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java:
##########
@@ -1072,12 +1073,10 @@ public Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartitio
             }
 
             try {
-                return applicationEventHandler.addAndGet(listOffsetsEvent)
-                    .entrySet()
-                    .stream()
-                    .collect(Collectors.toMap(
-                        Map.Entry::getKey,
-                        entry -> entry.getValue().buildOffsetAndTimestamp()));
+                Map<TopicPartition, OffsetAndTimestampInternal> offsets = applicationEventHandler.addAndGet(listOffsetsEvent);
+                Map<TopicPartition, OffsetAndTimestamp> results = new HashMap<>(offsets.size());
+                offsets.forEach((k, v) -> results.put(k, v != null ? v.buildOffsetAndTimestamp() : null));

Review Comment:
   > The idea was to mimic that in the AsyncKafkaConsumer implementation.
   
   I understand the intention to mimic the existing behavior, but do we really need to mimic a bad design, especially when newer Java collection APIs, such as `Collectors.toMap` and `Map.copyOf`, don't accept `null` values? In my opinion, `AsyncKafkaConsumer` could provide better behavior and reduce the risk of `NullPointerException`s for users, even if that introduces a small behavior change. We can document this change in the relevant methods.
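
   To make the null-handling concern concrete, here is a minimal, self-contained sketch (the class name `NullValueDemo` and the sample data are made up for illustration) showing that `Collectors.toMap` and `Map.copyOf` both reject `null` values, while the plain `HashMap` approach in this patch tolerates them:

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.stream.Collectors;

   public class NullValueDemo {
       public static void main(String[] args) {
           // A source map with a null value, standing in for a partition
           // whose offset lookup returned no result.
           Map<String, String> source = new HashMap<>();
           source.put("topic-0", "offset-42");
           source.put("topic-1", null);

           // Collectors.toMap throws NullPointerException on a null value,
           // since it accumulates entries via Map#merge.
           try {
               source.entrySet().stream()
                     .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
           } catch (NullPointerException e) {
               System.out.println("Collectors.toMap rejected the null value");
           }

           // Map.copyOf likewise rejects null keys and values.
           try {
               Map.copyOf(source);
           } catch (NullPointerException e) {
               System.out.println("Map.copyOf rejected the null value");
           }

           // A mutable HashMap populated via forEach tolerates nulls,
           // which is what the replacement code in this PR relies on.
           Map<String, String> results = new HashMap<>(source.size());
           source.forEach(results::put);
           System.out.println("HashMap result: " + results);
       }
   }
   ```

   The `NullPointerException` from `Collectors.toMap` comes from its internal use of `Map#merge`, which is why the old stream-based code could blow up whenever a lookup produced no offset for a partition.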


