squito commented on a change in pull request #24817: [SPARK-27963][core] Allow dynamic allocation without a shuffle service.
URL: https://github.com/apache/spark/pull/24817#discussion_r298263481
##########
File path:
core/src/main/scala/org/apache/spark/scheduler/dynalloc/ExecutorMonitor.scala
##########
@@ -201,6 +332,25 @@ private[spark] class ExecutorMonitor(
}
}
+  override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
+    case ShuffleCleanedEvent(id) => cleanupShuffle(id)
+    case _ =>
+  }
+
+  override def rddCleaned(rddId: Int): Unit = { }
+
+  override def shuffleCleaned(shuffleId: Int): Unit = {
+    // Because this is called in a completely separate thread, we post a custom event to the
+    // listener bus so that the internal state is safely updated.
+    listenerBus.post(ShuffleCleanedEvent(shuffleId))
Review comment:
Yeah, that's the weird part: you get this event with a class that is not public.
The only alternatives I can think of are (1) creating a new event loop here,
which just forwards the listener events along with this new event, or (2) creating an
`InternalListenerEvent` marker that is automatically filtered from all non-internal
listeners. But both of those seem like overkill.
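For concreteness, option (2) could look roughly like the sketch below: a marker trait that the bus checks before delivering an event to externally registered listeners. All names here (`InternalListenerEvent`, `Bus`, `addInternal`, `addExternal`) are illustrative assumptions, not actual Spark API; this is just a minimal model of the filtering idea, not a proposed implementation.

```scala
// Hypothetical sketch of option (2). None of these types are real Spark classes.
trait SparkListenerEvent
trait InternalListenerEvent extends SparkListenerEvent

// An internal-only event, e.g. the shuffle-cleanup notification discussed above.
case class ShuffleCleanedEvent(shuffleId: Int) extends InternalListenerEvent

trait Listener { def onEvent(e: SparkListenerEvent): Unit }

class Bus {
  private var internal = List.empty[Listener]
  private var external = List.empty[Listener]

  def addInternal(l: Listener): Unit = internal ::= l
  def addExternal(l: Listener): Unit = external ::= l

  def post(e: SparkListenerEvent): Unit = {
    // Internal listeners see every event.
    internal.foreach(_.onEvent(e))
    // Events carrying the marker trait are filtered from external listeners,
    // so the non-public event class never escapes to user code.
    e match {
      case _: InternalListenerEvent => // dropped for external listeners
      case _ => external.foreach(_.onEvent(e))
    }
  }
}
```

The appeal is that filtering happens in one place (the bus) rather than in every listener, but as noted above it adds a new public-facing concept for what is currently a single internal event.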
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]