mridulm edited a comment on pull request #34098:
URL: https://github.com/apache/spark/pull/34098#issuecomment-955647214


   Thanks for digging into this further, @lxian!
   Apologies for the delay in getting back on this; adding to the answers
to my queries:
   
   * Re: `mapOutputTracker.stop()` can throw `SparkException` in case of timeout
     * As @lxian pointed out, this can't happen now after Holden's changes (I
think I might have been looking at a different branch, sorry for the confusion).
   * Re: `metricsSystem.stop()` could throw an exception - depends on the sink.
     * As @lxian detailed, the current Spark `Sink` implementations should not
cause this to happen. Having said that:
     * Spark supports plugging in custom `Sink`s, so looking only at what
exists in our codebase is unfortunately insufficient.
       * An exception here prevents everything else in `SparkEnv.stop` from
running.
     * To be defensive, handling this would be better - thoughts? A sketch of
the pattern follows this list.
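   A minimal sketch of the defensive pattern I have in mind - the component
classes below are simplified stand-ins (not the actual `SparkEnv.stop` body),
and `tryLogNonFatalError` mirrors `Utils.tryLogNonFatalError`:
   ```scala
   import scala.util.control.NonFatal

   object StopDefensively {
     // Mirrors Utils.tryLogNonFatalError: log non-fatal exceptions and keep
     // going. InterruptedException is not matched by NonFatal, so it still
     // propagates.
     def tryLogNonFatalError(block: => Unit): Unit = {
       try block
       catch {
         case NonFatal(t) =>
           System.err.println(s"Uncaught exception during stop: $t")
       }
     }

     // Hypothetical stand-ins for components stopped during SparkEnv.stop.
     class MetricsSystem {
       def stop(): Unit = throw new RuntimeException("custom Sink.close failed")
     }
     class OutputCommitCoordinator {
       def stop(): Unit = println("outputCommitCoordinator stopped")
     }

     def main(args: Array[String]): Unit = {
       val metricsSystem = new MetricsSystem
       val outputCommitCoordinator = new OutputCommitCoordinator
       // Wrapping each call means a throwing custom Sink cannot prevent the
       // cleanup steps that follow it from running.
       tryLogNonFatalError(metricsSystem.stop())
       tryLogNonFatalError(outputCommitCoordinator.stop())
     }
   }
   ```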
   
   Both of the points below relate to `InterruptedException`:
   * `blockManager.stop()` can throw `InterruptedException`
   * `rpcEnv.awaitTermination` could throw `InterruptedException`
   
   I agree with @lxian that `InterruptedException` is not caught by
`Utils.tryLogNonFatalError` anyway - so let us preserve the existing behavior
for that (see the snippet below).
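   For context, `scala.util.control.NonFatal` (which `Utils.tryLogNonFatalError`
matches on) deliberately excludes `InterruptedException`, so the wrapper
rethrows it rather than swallowing it; a quick check:
   ```scala
   import scala.util.control.NonFatal

   object NonFatalCheck {
     def main(args: Array[String]): Unit = {
       // NonFatal.apply returns true only for throwables it would catch.
       println(NonFatal(new RuntimeException("boom")))  // true: caught, logged
       println(NonFatal(new InterruptedException()))    // false: rethrown, interrupt preserved
     }
   }
   ```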
   
   Given the above, can we address the potential issue with `Sink.close`?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


