Github user Ngone51 commented on the issue:
https://github.com/apache/spark/pull/20987
What I'm concerned about is whether there is a situation like: a task gets
killed after it hits a FetchFailure, but is re-run later (not by resubmit) and
hits a FetchFailure again, this time without a TaskKilledException. (Or is this
fix only against speculative tasks?)
Of course, we can handle the FetchFailure during the re-run. But it would be
better if we could handle the FetchFailure earlier, when we get those two
exceptions in order within the same task run.
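
Just to make "handling it earlier in the same run" concrete, here is a rough sketch. All names below are made up for illustration and are not the real Executor/TaskRunner code; the point is only the precedence: if a fetch failure was recorded on the task's context during the run, it wins over a later TaskKilledException when the final failure reason is reported.

```scala
// Hypothetical, simplified sketch -- not the actual Spark Executor code.
// It only illustrates "FetchFailure recorded earlier in the run takes
// precedence over a kill that arrives later in the same run".

sealed trait TaskEndReason
case class FetchFailed(mapId: Long, reduceId: Int) extends TaskEndReason
case class TaskKilled(reason: String) extends TaskEndReason

class TaskKilledException(val reason: String) extends RuntimeException(reason)
class FetchFailedException(val mapId: Long, val reduceId: Int) extends RuntimeException

// Imagined per-run context that remembers a fetch failure even if the
// task is subsequently killed in the same run.
class TaskContextSketch {
  @volatile private var fetchFailed: Option[FetchFailedException] = None
  def setFetchFailed(e: FetchFailedException): Unit = fetchFailed = Some(e)
  def getFetchFailed: Option[FetchFailedException] = fetchFailed
}

object FetchFailureFirst {
  def reportFinalReason(ctx: TaskContextSketch, thrown: Throwable): TaskEndReason =
    thrown match {
      // Even when the run ends with a kill, prefer the fetch failure recorded
      // earlier in the same run, so the scheduler can resubmit the map stage.
      case k: TaskKilledException =>
        ctx.getFetchFailed
          .map(ff => FetchFailed(ff.mapId, ff.reduceId))
          .getOrElse(TaskKilled(k.reason))
      case ff: FetchFailedException => FetchFailed(ff.mapId, ff.reduceId)
      case other => TaskKilled(s"unexpected: ${other.getMessage}")
    }
}
```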