tgravescs commented on pull request #27943: URL: https://github.com/apache/spark/pull/27943#issuecomment-636870809
Thanks for the feedback. In some ways that is what I would expect; the point is to fail faster. You should be able to adjust `spark.shuffle.io.maxRetries` and `spark.shuffle.io.retryWait` to tune this. It's hard to balance users who want jobs to fail fast against those who want them to keep running no matter what, and likewise nodes that are just GCing versus ones with real problems. Are you using blacklisting?
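As a minimal sketch of the tuning mentioned above, the two shuffle retry settings can be raised at submit time. The values here are illustrative, not recommendations (the defaults are 3 retries and a 5s wait):

```shell
# Hypothetical example: allow more shuffle fetch retries with a longer wait,
# so transient issues (e.g. long GC pauses on the serving node) are tolerated.
spark-submit \
  --conf spark.shuffle.io.maxRetries=10 \
  --conf spark.shuffle.io.retryWait=30s \
  --class com.example.MyApp \
  my-app.jar
```

Lowering these values instead makes fetch failures surface faster, which is the fail-fast behavior discussed in this thread.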