Github user steveloughran commented on the issue:
    @rdblue thanks. That was what I thought (the output coordinator doesn't 
tell incoming speculative work to abort until any actively committing task 
attempt has returned); I was just worried after the conversation.
    In a lot of the Hadoop FS code, `InterruptedException` is converted to 
`InterruptedIOException` so it can trickle up through IOException-only 
signatures. But since other IOEs subclass it (socket/connect timeouts), you 
can't assume that an `InterruptedIOException` implies the job was interrupted, 
only that some IO went wrong.

