ryzhyk commented on issue #7251:
URL: https://github.com/apache/arrow-rs/issues/7251#issuecomment-2735415879

   > [@crepererum](https://github.com/crepererum) rightly pointed out that 
implementing retries (aka 
[#7242](https://github.com/apache/arrow-rs/issues/7242)) would be better than 
splitting into smaller requests to fit within a timeout, as the retry mechanism 
automatically adjusts to current network conditions
   
   Isn't there an upper bound on the timeout (30s by default)? And if that 
bound isn't large enough to push a 200MiB row group through a slow connection, 
won't the request fail anyway? Even if the request eventually succeeds, relying 
on retries to dynamically adjust the timeout seems wasteful compared to 
bounding the request size up front, which improves the chances that the 
request succeeds on the first attempt.
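   To illustrate what "bounding request size" means here, a minimal sketch: a 
hypothetical `split_range` helper (not part of `object_store`; the name and 
the 16 MiB bound are illustrative assumptions) that breaks a large byte range 
into sub-ranges no bigger than a chosen maximum, so each individual GET is 
small enough to complete within a fixed timeout:

   ```rust
   use std::ops::Range;

   /// Hypothetical helper: split a byte range of `len` bytes starting at
   /// `start` into sub-ranges of at most `max_request` bytes. Each sub-range
   /// could then be fetched as its own request (e.g. via an HTTP Range GET),
   /// keeping every request small enough to finish within a fixed timeout.
   fn split_range(start: u64, len: u64, max_request: u64) -> Vec<Range<u64>> {
       assert!(max_request > 0, "max_request must be non-zero");
       let end = start + len;
       let mut ranges = Vec::new();
       let mut pos = start;
       while pos < end {
           // Clamp the final chunk so it does not run past the end.
           let chunk_end = (pos + max_request).min(end);
           ranges.push(pos..chunk_end);
           pos = chunk_end;
       }
       ranges
   }

   fn main() {
       // A 200 MiB row group capped at 16 MiB per request yields 13 ranges
       // (twelve full 16 MiB chunks plus one 8 MiB remainder).
       let ranges = split_range(0, 200 * 1024 * 1024, 16 * 1024 * 1024);
       assert_eq!(ranges.len(), 13);
       assert_eq!(ranges.last().unwrap().end, 200 * 1024 * 1024);
       println!("{} sub-requests", ranges.len());
   }
   ```

   Each sub-range either succeeds or fails quickly on its own, so a slow 
connection costs at most one small retry rather than a re-send of the whole 
200 MiB payload.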

