[ https://issues.apache.org/jira/browse/KAFKA-15615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17946415#comment-17946415 ]
appchemist edited comment on KAFKA-15615 at 4/22/25 1:01 PM:
-------------------------------------------------------------

Is this patch no longer necessary? If so, I'll close the PR. If not, it's been a while, so I'll need to check the code again.

was (Author: appchemist):
Is this patch no longer necessary? If no longer necessary, I'll close the PR. If no, it's been a while and I'll have to check the code again.

> Improve handling of fetching during metadata updates
> ----------------------------------------------------
>
> Key: KAFKA-15615
> URL: https://issues.apache.org/jira/browse/KAFKA-15615
> Project: Kafka
> Issue Type: Improvement
> Components: clients, consumer
> Affects Versions: 3.8.0
> Reporter: Kirk True
> Assignee: appchemist
> Priority: Major
> Labels: consumer-threading-refactor, fetcher
>
> [During a review of the new fetcher|https://github.com/apache/kafka/pull/14406#discussion_r1333393941], [~junrao] found what appears to be an opportunity for optimization.
>
> When a fetch response returns an error related to partition leadership, fencing, etc., a metadata refresh is triggered. However, that refresh takes time to complete, and in the interim the consumer blindly retries fetching the same partition, in a "definition of insanity" kind of way. Ideally, the consumer would have a way to temporarily ignore those partitions, much like the "pausing" approach, so that they are skipped until the metadata refresh response is fully processed.
>
> This affects both the existing KafkaConsumer and the new PrototypeAsyncConsumer.
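To make the proposal in the description concrete, here is a minimal, illustrative sketch of the kind of bookkeeping it suggests: partitions whose last fetch failed with a metadata-related error are excluded from fetch requests until the client's metadata has been refreshed. The class name, method names, and the integer "metadata version" parameter are hypothetical and are not taken from Kafka's fetcher code; the actual fix in the consumer's fetch-preparation path may look quite different.

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;

/**
 * Illustrative sketch only (names are hypothetical): tracks partitions whose last
 * fetch failed with a metadata-related error (e.g. NOT_LEADER_OR_FOLLOWER or
 * FENCED_LEADER_EPOCH) and reports them as non-fetchable until the client's
 * metadata has been updated since the error was observed.
 */
public class MetadataAwareFetchFilter {

    // Partition -> metadata update version that was current when the error occurred.
    private final Map<TopicPartition, Integer> awaitingMetadata = new HashMap<>();

    /** Called when a fetch for this partition fails with a metadata-related error. */
    public void onMetadataError(TopicPartition partition, int currentMetadataVersion) {
        awaitingMetadata.put(partition, currentMetadataVersion);
    }

    /**
     * A partition becomes fetchable again only after the metadata update version has
     * advanced past the version that was current when the error was recorded.
     */
    public boolean isFetchable(TopicPartition partition, int currentMetadataVersion) {
        Integer versionAtError = awaitingMetadata.get(partition);
        if (versionAtError == null)
            return true;
        if (currentMetadataVersion > versionAtError) {
            awaitingMetadata.remove(partition);
            return true;
        }
        return false;
    }
}
{code}

The fetch-preparation loop would then consult isFetchable() before adding a partition to a fetch request, in much the same way that paused partitions are skipped today.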