HeartSaVioR edited a comment on pull request #35994: URL: https://github.com/apache/spark/pull/35994#issuecomment-1082539764
Probably there is another good point: the FS API is the better layer to fix this. From Spark's point of view, there is no way to distinguish whether a failure is retryable or not (we have to deal with NonFatal since we have no context; even IOException may be too general), whereas the FS client layer has more context to decide. Once the client layer determines that a failure is retryable without changing semantics (e.g. the operation is idempotent), it can perform the retry itself, transparently to Spark.
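To illustrate the idea, here is a minimal sketch (names are illustrative, not actual Spark or Hadoop APIs) of a client-layer retry wrapper: it retries only failures the client layer can classify as transient, and only for operations the caller declares idempotent, so the retry stays transparent to the layer above.

```python
import time


def call_with_retries(idempotent_op, is_retryable, max_attempts=3, backoff_s=0.0):
    """Invoke an idempotent operation, retrying transient failures.

    Hypothetical helper for illustration: the client layer supplies
    `is_retryable`, because only it has the context to classify failures.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return idempotent_op()
        except Exception as exc:
            # Non-retryable failures (or exhausted attempts) propagate
            # immediately to the caller, unchanged.
            if not is_retryable(exc) or attempt == max_attempts:
                raise
            if backoff_s:
                time.sleep(backoff_s)
```

The caller above Spark's level never sees the transient failures; it either gets the result or the final, genuinely non-retryable exception.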
