Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/204#discussion_r11456895
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1115,6 +1114,20 @@ class SparkContext(
/** Register a new RDD, returning its RDD ID */
private[spark] def newRddId(): Int = nextRddId.getAndIncrement()
+ /** Post the application start event */
+ private def postApplicationStart() {
+ listenerBus.post(SparkListenerApplicationStart(appName, startTime, sparkUser))
+ }
+
+ /**
+ * Post the application end event to all listeners immediately, rather than adding it
+ * to the event queue for it to be asynchronously processed eventually. Otherwise, a race
+ * condition exists in which the listeners may stop before this event has been propagated.
+ */
+ private def postApplicationEnd() {
--- End diff --
This is different from the Shutdown event. The StageCompletion event will
still be processed on the regular listener bus thread. It is true, however,
that the ApplicationEnd event may then be processed before the
StageCompletion event. A proper way of dealing with this is in #366, where
the ordering is preserved but all events are guaranteed to be processed to
completion.
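To illustrate the race the comment describes, here is a minimal, self-contained sketch in Scala. The names (`ToyListenerBus`, `postToAll`, `Event`) are toy stand-ins, not Spark's actual classes: an asynchronous bus drains a queue on a background thread, so an event still sitting in the queue when the bus stops can be dropped, whereas posting the end event synchronously on the caller's thread cannot be lost that way.

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

sealed trait Event
case object StageCompletion extends Event
case object ApplicationEnd extends Event

/** A toy async listener bus: posted events are queued and handled on a
  * background thread, mimicking the asynchronous event queue. */
class ToyListenerBus(handle: Event => Unit) {
  private val queue = new LinkedBlockingQueue[Event]()
  @volatile private var stopped = false

  private val thread = new Thread {
    setDaemon(true)
    override def run(): Unit = while (!stopped) {
      // Poll with a short timeout so the thread notices `stopped` promptly.
      val e = queue.poll(1, TimeUnit.MILLISECONDS)
      if (e != null) handle(e)
    }
  }
  thread.start()

  /** Asynchronous post: the event may still be queued when stop() runs,
    * which is exactly the race the review comment points out. */
  def post(e: Event): Unit = queue.put(e)

  /** Synchronous post: handled on the caller's thread immediately, so it
    * cannot be lost in the queue when the bus is stopped. */
  def postToAll(e: Event): Unit = handle(e)

  def stop(): Unit = { stopped = true; thread.join() }
}
```

Posting the application-end event through `postToAll` before `stop()` guarantees delivery regardless of how the background thread is scheduled; an event sent via `post` just before `stop()` carries no such guarantee in this sketch.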