Github user steveloughran commented on the issue:

    https://github.com/apache/spark/pull/20490
  
    @rdblue thanks. That was what I thought (the output coordinator doesn't 
tell incoming speculative work to abort until any actively committing task 
attempt has returned); I was just worried after the conversation.
    
    In a lot of the Hadoop FS code, `InterruptedException` is converted to 
`InterruptedIOException` so it can trickle up, but since other IOEs subclass 
it (socket/connect timeouts), you can't assume that an `InterruptedIOException` 
means the job was interrupted, only that some IO went wrong.

