Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/221#issuecomment-38604656
I think it would be better not to publicly expose the listener bus, since
that's meant to be an internal Spark thing, but I agree that the SparkContext
shouldn't stop before trying to drain the listener bus. I think what we should
do is drain the listener bus ourselves when SparkContext.stop() gets called:
add a waitUntilEmpty(some reasonable timeout) call to LiveListenerBus.stop().
Does this seem right to you two, @andrewor14 and @pwendell?
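The drain-on-stop pattern suggested above can be sketched as a small standalone example. This is only an illustration of the idea, not Spark's actual LiveListenerBus: the class, queue, and timeout value here are all hypothetical, chosen to show how stop() can call a bounded waitUntilEmpty before shutting down the consumer thread.

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

// Hypothetical sketch of a listener bus that drains its event queue
// on stop(), mirroring the waitUntilEmpty idea from the comment above.
// This is NOT Spark's implementation; names and timeouts are illustrative.
object ListenerBusSketch {
  class ListenerBus {
    private val queue = new LinkedBlockingQueue[String]()
    @volatile private var stopped = false
    @volatile var processedCount = 0

    // Consumer thread: keeps polling until stop() is requested AND
    // the queue has been drained.
    private val consumer = new Thread(() => {
      while (!stopped || !queue.isEmpty) {
        val event = queue.poll(10, TimeUnit.MILLISECONDS)
        if (event != null) process(event)
      }
    })
    consumer.setDaemon(true)
    consumer.start()

    private def process(event: String): Unit = processedCount += 1

    def post(event: String): Unit = queue.put(event)

    // Block until the queue is empty or the timeout elapses;
    // returns false if we gave up waiting.
    def waitUntilEmpty(timeoutMillis: Long): Boolean = {
      val deadline = System.currentTimeMillis() + timeoutMillis
      while (!queue.isEmpty) {
        if (System.currentTimeMillis() > deadline) return false
        Thread.sleep(10)
      }
      true
    }

    // stop() drains pending events (with a bounded wait) before
    // signaling the consumer to exit, so no queued events are dropped.
    def stop(): Unit = {
      waitUntilEmpty(1000) // timeout value is an arbitrary placeholder
      stopped = true
      consumer.join()
    }
  }

  def run(): Int = {
    val bus = new ListenerBus
    (1 to 100).foreach(i => bus.post(s"event-$i"))
    bus.stop() // drains before shutdown, so all 100 events are processed
    bus.processedCount
  }

  def main(args: Array[String]): Unit = println(run())
}
```

Without the waitUntilEmpty call in stop(), events still sitting in the queue at shutdown could be silently dropped, which is exactly the problem being discussed.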