kszlim commented on code in PR #5383:
URL: https://github.com/apache/arrow-rs/pull/5383#discussion_r1485430537
##########
object_store/src/client/retry.rs:
##########
@@ -263,7 +262,7 @@ impl RetryExt for reqwest::RequestBuilder {
                         do_retry = true
                     } else if let Some(source) = e.source() {
                         if let Some(e) = source.downcast_ref::<hyper::Error>() {
-                            if e.is_connect() || e.is_closed() || e.is_incomplete_message() {
+                            if !(e.is_parse() || e.is_parse_status() || e.is_parse_too_large() || e.is_user() || e.is_canceled()) {
Review Comment:
You mean like it was previously? The problem I'm encountering is that hyper
exposes no predicates for the error conditions at
`https://docs.rs/hyper/latest/src/hyper/error.rs.html#478`.
I totally agree with you that this code is much more aggressive, but there
seems to be no other way.
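To make the shift concrete, here's a rough sketch of the old allow-list
versus the new deny-list (the function names are mine, just for
illustration; the predicates are the ones from the diff):

```rust
// Old behaviour: retry only an explicit allow-list of error kinds.
fn was_retried(e: &hyper::Error) -> bool {
    e.is_connect() || e.is_closed() || e.is_incomplete_message()
}

// New behaviour: retry everything *except* errors that are clearly
// unrecoverable (parse failures, user errors, cancellations). Any
// error kind hyper doesn't expose a predicate for now retries too,
// which is what makes this more aggressive.
fn is_retried(e: &hyper::Error) -> bool {
    !(e.is_parse()
        || e.is_parse_status()
        || e.is_parse_too_large()
        || e.is_user()
        || e.is_canceled())
}
```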
I think what can possibly justify this code is that we can't retry
indefinitely anyway, since the retry limit has to be set by the caller of this
code? I agree it's not ideal, but it seems it will be quite a long time
before hyper exposes any more fine-grained information about errors. I'm
happy to hear any other suggestions though.
Does it make sense to have two tiers of errors: one with a lower retry limit
for the unknown sorts of errors that get caught by the conditions I inserted,
and one with a higher limit for what the original code caught? It's
definitely uglier for consumers though. Something like the sketch below.
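A hypothetical shape for the two-tier idea (none of these names exist in
`object_store` today, this is purely illustrative):

```rust
/// Hypothetical two-tier retry configuration.
struct TwoTierRetryConfig {
    /// Limit for errors the original code explicitly retried
    /// (connect / closed / incomplete message).
    known_transient_limit: usize,
    /// Lower limit for "unknown" errors only caught by the broader
    /// deny-list condition.
    unknown_error_limit: usize,
}

fn retry_limit(cfg: &TwoTierRetryConfig, e: &hyper::Error) -> usize {
    if e.is_connect() || e.is_closed() || e.is_incomplete_message() {
        cfg.known_transient_limit
    } else {
        cfg.unknown_error_limit
    }
}
```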
Otherwise, it might make sense to have polars implement `RetryExt` itself,
or perhaps to gate this `RetryExt` implementation behind an optional feature?
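i.e. something along these lines (the feature name is made up):

```rust
// Hypothetical feature gate: default builds keep the conservative
// allow-list; opting in to an "aggressive-retry" feature switches
// to the broader deny-list from this PR.
#[cfg(not(feature = "aggressive-retry"))]
fn is_retryable(e: &hyper::Error) -> bool {
    e.is_connect() || e.is_closed() || e.is_incomplete_message()
}

#[cfg(feature = "aggressive-retry")]
fn is_retryable(e: &hyper::Error) -> bool {
    !(e.is_parse()
        || e.is_parse_status()
        || e.is_parse_too_large()
        || e.is_user()
        || e.is_canceled())
}
```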
@ritchie46, curious what your thoughts are. I have this PR open to resolve
https://github.com/pola-rs/polars/issues/14384, which I almost invariably
encounter when querying a large enough dataset in S3 (from an EC2 instance).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]