Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/4869#issuecomment-77035463
  
    There aren't great alternatives here, because the root problem is that we 
have a bunch of global shared state, so it's hard to avoid synchronization 
without doing a huge refactoring.  Therefore, this looks good to me.
    
    I think a short hang during `SparkContext.stop()` is pretty unlikely in 
practice; if it does turn out to be a problem during testing, we can revisit 
this and consider more involved approaches to safely interrupting active 
cleanup tasks.
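
    To illustrate the tradeoff, here is a minimal Scala sketch (not the actual 
patch; the names `SharedStateSketch`, `runCleanupTask`, and `activeContext` are 
hypothetical and not from the Spark codebase). When `stop()` and a cleanup task 
synchronize on the same lock guarding global state, `stop()` simply waits until 
the in-flight cleanup finishes, which is the short hang described above.

    ```scala
    object SharedStateSketch {
      private val lock = new Object                      // hypothetical global lock
      private var activeContext: Option[String] = None   // hypothetical shared state

      def runCleanupTask(): Unit = lock.synchronized {
        Thread.sleep(100)          // simulate a cleanup task that holds the lock briefly
        activeContext = None
      }

      def stop(): Unit = lock.synchronized {
        // If a cleanup task currently holds `lock`, this call blocks until it is released.
        activeContext = None
      }

      def main(args: Array[String]): Unit = {
        lock.synchronized { activeContext = Some("ctx") }
        val cleaner = new Thread(new Runnable { def run(): Unit = runCleanupTask() })
        cleaner.start()
        Thread.sleep(10)           // let the cleanup task acquire the lock first
        stop()                     // waits for runCleanupTask to finish: the "short hang"
        cleaner.join()
      }
    }
    ```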

