GitHub user vanzin commented on the issue:

    https://github.com/apache/spark/pull/21558
  
    > If you have one stage running that gets a fetch failure, if it leaves
    > any tasks running
    
    I took a look at the output commit coordinator code and, depending on how
    the scheduler behaves, it might be OK.
    
    The coordinator will deny commits for finished stages, so it depends on the
    order of events. If the failed attempt is marked as "failed" before the next
    attempt starts, then it's OK even if tasks from the failed attempt are still
    running. Looking at the code that handles `FetchFailed` errors in
    `DAGScheduler`, that seems to be the case.
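
    For illustration only, here is a minimal sketch of that ordering argument.
    The names and data structures are hypothetical, not Spark's actual
    `OutputCommitCoordinator` internals; the point is just that, as long as the
    failed attempt is recorded before the retry starts, a commit request from a
    straggler task of the failed attempt gets denied:

    ```scala
    import scala.collection.mutable

    // Hypothetical, simplified commit coordinator (not Spark's real one).
    object CommitCoordinatorSketch {
      // stageId -> attempt numbers that have been marked as failed
      private val failedAttempts = mutable.Map.empty[Int, Set[Int]]

      // Called when the scheduler marks a stage attempt as failed.
      def stageFailed(stageId: Int, attempt: Int): Unit =
        failedAttempts(stageId) =
          failedAttempts.getOrElse(stageId, Set.empty) + attempt

      // A straggler task from a failed attempt may still call this;
      // it is denied because its attempt was already marked failed.
      def canCommit(stageId: Int, attempt: Int): Boolean =
        !failedAttempts.getOrElse(stageId, Set.empty).contains(attempt)

      def main(args: Array[String]): Unit = {
        // Attempt 0 fails, attempt 1 is the retry; a leftover task from
        // attempt 0 asks to commit and is rejected, the retry is allowed.
        stageFailed(stageId = 4, attempt = 0)
        assert(!canCommit(stageId = 4, attempt = 0))
        assert(canCommit(stageId = 4, attempt = 1))
        println("stale attempt denied, retry allowed")
      }
    }
    ```

    If the order were reversed (the retry starting before the failure is
    recorded), the sketch would happily let the stale task commit, which is
    exactly the race the quoted concern is about.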


