Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/221#issuecomment-38621306
  
    I'm still not too sure about doing this through a timeout. It seems to 
me that people would only ever want to set this timeout to either 0 or 
infinity. Setting it to anything in between amounts to guessing how long it 
will take your listeners to finish processing an unknown number of events, 
which is quite hard. If you guess wrong, the penalty is that you have to 
re-run your application and guess again.
    
    To ensure the rest of the SparkContext state gets cleaned up, we can 
move `listenerBus.stop()` to the end of `sc.stop()`, in case an undying 
listener keeps the event bus thread from stopping.
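    The ordering I have in mind can be sketched roughly as follows. This is 
illustrative only: `EventBus`, `Context`, and the field names are hypothetical 
stand-ins, not SparkContext's actual internals.

```scala
// Minimal sketch of the suggested shutdown ordering (assumed names,
// not actual Spark code).
class EventBus {
  @volatile var stopped = false
  def stop(): Unit = {
    // In the real listener bus this would join the event thread, which
    // is exactly where an undying listener could hang forever.
    stopped = true
  }
}

class Context {
  val listenerBus = new EventBus
  var otherStateCleaned = false

  def stop(): Unit = {
    // Clean up everything else first...
    otherStateCleaned = true
    // ...and stop the bus last, so even a hung listener cannot prevent
    // the rest of the context's state from being released.
    listenerBus.stop()
  }
}
```

    The point of the ordering is that a listener stuck in `listenerBus.stop()` 
can no longer block any cleanup that has already happened before it.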

