lucasbru commented on code in PR #15525: URL: https://github.com/apache/kafka/pull/15525#discussion_r1538901757
########## clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java: ##########
@@ -1141,21 +1141,27 @@ private Map<TopicPartition, Long> beginningOrEndOffset(Collection<TopicPartition
         if (partitions.isEmpty()) {
             return Collections.emptyMap();
         }
+
         Map<TopicPartition, Long> timestampToSearch = partitions
                 .stream()
                 .collect(Collectors.toMap(Function.identity(), tp -> timestamp));
         Timer timer = time.timer(timeout);
         ListOffsetsEvent listOffsetsEvent = new ListOffsetsEvent(
                 timestampToSearch,
-                false,
                 timer);
-        Map<TopicPartition, OffsetAndTimestamp> offsetAndTimestampMap = applicationEventHandler.addAndGet(
+
+        // shortcut the request if the timeout is zero.
+        if (timeout.isZero()) {

Review Comment:
   I've tried to dig into this a bit. Short-cutting is definitely the right thing to do, since otherwise we'll run into a timeout exception.

   The old consumer is a bit weird in that it fires the list offsets request but never returns a result. I also couldn't find a cache that is influenced by the list offsets request, so what is the point of sending the request? Replicating the old consumer behavior would mean creating the event for the background thread but not waiting for the result. We can consider changing the behavior here, but let's make sure we do it consciously.
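To make the two behaviors being compared concrete, here is a minimal, self-contained sketch of the difference between short-circuiting on a zero timeout and the old consumer's "fire the event but never wait for a result" style. The class and member names (`ListOffsetsSketch`, `events`, `beginningOffsets`) are hypothetical stand-ins, not the real `AsyncKafkaConsumer` internals.

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the review discussion: on Duration.ZERO we still hand
// the list-offsets request to the background thread (old-consumer style), but
// we return immediately instead of blocking for a result.
class ListOffsetsSketch {
    // Stand-in for the application event handler's queue to the background thread.
    static final BlockingQueue<Map<String, Long>> events = new LinkedBlockingQueue<>();

    static Map<String, Long> beginningOffsets(Collection<String> partitions, Duration timeout) {
        if (partitions.isEmpty()) {
            return Collections.emptyMap();
        }
        Map<String, Long> timestampToSearch = new HashMap<>();
        for (String tp : partitions) {
            timestampToSearch.put(tp, -2L); // -2 = "earliest" sentinel in this sketch
        }
        // Shortcut: enqueue the event so the background thread could still send
        // the request, but do not wait for (or return) any result.
        if (timeout.isZero()) {
            events.offer(timestampToSearch);
            return Collections.emptyMap();
        }
        events.offer(timestampToSearch);
        // A real implementation would block here until the background thread
        // replies or the timer expires; this sketch just returns the query map.
        return timestampToSearch;
    }
}
```

The alternative the reviewer raises, changing the behavior consciously, would simply skip the `events.offer(...)` call inside the zero-timeout branch, so that no request is sent at all.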