lianetm commented on PR #16749:
URL: https://github.com/apache/kafka/pull/16749#issuecomment-2322419699

   Actually, I would expect the behaviour that the documentation currently describes:
   
   > If insufficient data is available the request will wait for that much data to accumulate
   
   and it would make sense to me if we wanted to extend that with something like: `...waiting for up to the fetch.max.wait.ms` (and we link the 2 configs).
   
   Is that the intention? The current change is actually confusing: it seems to suggest that the request will wait for the "full" `fetch.max.wait.ms` even if the data becomes available sooner.
   
   This is truly a broker-side behaviour I'm not fully familiar with (the consumer just passes the config to the broker), but looking at how the delayed fetch is handled on the broker, it aligns with the expectation that it waits for the data to appear, for "at most" the max wait 
(https://github.com/apache/kafka/blob/70dd577286de31ef20dc4f198e95f9b9e4479b47/core/src/main/scala/kafka/server/DelayedOperation.scala#L34-L36).
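
   For reference, this is the interaction I'm describing, as a minimal consumer-side sketch (the broker address, group id, topic name, and the specific byte/ms values are made up for illustration, not taken from the PR):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FetchWaitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "fetch-wait-demo");         // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Expected semantics: the broker holds the fetch until at least
        // fetch.min.bytes of data are available, but never longer than
        // fetch.max.wait.ms; whichever condition is met first completes
        // the delayed fetch.
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024); // wait for ~64 KB to accumulate...
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);     // ...but for at most 500 ms

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.printf("Fetched %d records%n", records.count());
        }
    }
}
```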
    

