Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/12258
  
    @kayousterhout I don't think the tasks will be left hanging. This is a 
state where the executor's metadata still exists and the scheduler is still 
tracking its tasks, so that it can fail them with the appropriate reason. 
When the executor is finally marked as failed, 
`TaskSetManager.executorLost` will be called, which will fail this task.
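
    For illustration only, here is a minimal, self-contained sketch of that idea (toy class and method names, not Spark's actual implementation): the manager keeps tracking a lost executor's tasks until the executor is marked failed, then fails them all with an executor-lost reason.

```scala
// Toy model: running tasks stay tracked until executorLost fails them.
case class Task(id: Int, executorId: String)

class ToyTaskSetManager {
  private var running = Map.empty[Int, Task]   // tasks still tracked
  var failedReasons = Map.empty[Int, String]   // task id -> failure reason

  def addRunning(t: Task): Unit = running += (t.id -> t)

  // Called once the executor is finally marked as failed: every task still
  // tracked on that executor is failed with the given reason.
  def executorLost(execId: String, reason: String): Unit = {
    val (lost, rest) = running.partition(_._2.executorId == execId)
    running = rest
    failedReasons ++= lost.keys.map(_ -> reason)
  }

  def runningCount: Int = running.size
}

val tsm = new ToyTaskSetManager
tsm.addRunning(Task(1, "exec-1"))
// The task is not "hanging": once the executor is marked failed, the
// manager fails it with an appropriate reason.
tsm.executorLost("exec-1", "ExecutorLostFailure (executor exec-1 lost)")
```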
    
    Your second point is valid; we could use those values to avoid 
recomputing the task, since we have a valid result. I don't feel strongly 
about it, though, since this case should be very rare.

