GitHub user kayousterhout commented on the issue:

    https://github.com/apache/spark/pull/13603
  
    Did you consider instead doing this when a task fails (on line 761 in
TaskSetManager)?  Instead of only checking whether the number of failures is
greater than maxTaskFailures, you could add a second check (when blacklisting
is enabled) of whether the task that just failed can still be scheduled
anywhere, and fail the task set if it can't.  This seems simpler to me.
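
    To make that concrete, here is a rough sketch of the shape of that second
check. This is illustrative only: the class and the `activeExecutors` /
`isBlacklisted` helpers are placeholders, not the actual TaskSetManager API.

    ```scala
    import scala.collection.mutable

    // Illustrative sketch only: the names here (activeExecutors,
    // isBlacklisted, the failure bookkeeping) are placeholders, not the
    // real TaskSetManager implementation.
    class TaskSetSketch(
        maxTaskFailures: Int,
        blacklistEnabled: Boolean,
        activeExecutors: () => Set[String],         // executor ids currently alive
        isBlacklisted: (Int, String) => Boolean) {  // (taskIndex, executorId)

      private val numFailures = mutable.Map.empty[Int, Int].withDefaultValue(0)

      /** Returns a reason to fail the whole task set, or None to keep going. */
      def onTaskFailure(taskIndex: Int): Option[String] = {
        numFailures(taskIndex) += 1
        if (numFailures(taskIndex) >= maxTaskFailures) {
          // Existing check: this task has failed too many times.
          Some(s"Task $taskIndex failed $maxTaskFailures times")
        } else if (blacklistEnabled &&
            activeExecutors().forall(exec => isBlacklisted(taskIndex, exec))) {
          // Proposed second check: every live executor is blacklisted for
          // this task, so it can never be scheduled again; fail fast instead
          // of letting the task set hang.
          Some(s"Task $taskIndex is blacklisted on every active executor")
        } else {
          None
        }
      }
    }
    ```

    One subtlety in the sketch: `forall` is vacuously true when
`activeExecutors()` returns an empty set, so the task set would also be failed
while no executors are alive at all, which is essentially the race discussed
below.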
    
    The main drawback I see in that approach is that the task failure may have
been caused by an executor failure, and the cluster manager may be in the
middle of launching a new executor that the task could run on, in which case
it's not correct to fail the task set.  My sense is that it's OK-ish to fail in
that case, since that should only happen for jobs that use a very small number
of executors, where random-ish failures are less likely, so the failure is more
likely to be a real issue with the job.

