[ https://issues.apache.org/jira/browse/KAFKA-15776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808653#comment-17808653 ]

Francois Visconte edited comment on KAFKA-15776 at 1/19/24 4:52 PM:
--------------------------------------------------------------------

[~ckamal] Any idea on how to move forward on this? Having to configure a very 
high fetch.max.wait.ms defeats the purpose of the KIP, which was to avoid 
requiring adaptations on the consumer side.
This issue is annoying in our test environment (using S3): even with a 
fetch.max.wait.ms of 2s we get flooded with InterruptedExceptions in the logs 
(also probably a sign of sub-optimally cancelling fetches from tiered storage, 
only to eventually retry and succeed). 



> Update delay timeout for DelayedRemoteFetch request
> ---------------------------------------------------
>
>                 Key: KAFKA-15776
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15776
>             Project: Kafka
>          Issue Type: Task
>            Reporter: Kamal Chandraprakash
>            Assignee: Kamal Chandraprakash
>            Priority: Major
>
> We are reusing the {{fetch.max.wait.ms}} config as the delay timeout for 
> DelayedRemoteFetchPurgatory. The purpose of {{fetch.max.wait.ms}} is to wait 
> for the given amount of time when there is no data available to serve the 
> FETCH request:
> {code:java}
> The maximum amount of time the server will block before answering the fetch 
> request if there isn't sufficient data to immediately satisfy the requirement 
> given by fetch.min.bytes.
> {code}
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedRemoteFetch.scala#L41]
> Using the same timeout in the DelayedRemoteFetchPurgatory can confuse the 
> user about how to configure an optimal value for each purpose. Moreover, the 
> config is of *LOW* importance, so most users won't configure it and will use 
> the default value of 500 ms.
> Having a delay timeout of 500 ms in DelayedRemoteFetchPurgatory can lead to 
> a higher number of expired delayed remote fetch requests when the remote 
> storage has any degradation.
> We should introduce a {{fetch.remote.max.wait.ms}} config (preferably a 
> server config) to define the delay timeout for DelayedRemoteFetch requests, 
> or take it from the client similar to {{request.timeout.ms}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
