vanzin commented on a change in pull request #24817: [WIP][SPARK-27963][core] Allow dynamic allocation without a shuffle service.
URL: https://github.com/apache/spark/pull/24817#discussion_r296930110
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/dynalloc/ExecutorMonitor.scala
 ##########
 @@ -201,6 +332,25 @@ private[spark] class ExecutorMonitor(
     }
   }
 
+  override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
+    case ShuffleCleanedEvent(id) => cleanupShuffle(id)
+    case _ =>
+  }
+
+  override def rddCleaned(rddId: Int): Unit = { }
+
+  override def shuffleCleaned(shuffleId: Int): Unit = {
+    // Because this is called in a completely separate thread, we post a custom event to the
+    // listener bus so that the internal state is safely updated.
+    listenerBus.post(ShuffleCleanedEvent(shuffleId))
+  }
 
 Review comment:
   Well, all listeners already have to deal with events they don't understand; that's the contract for `onOtherEvent`, since we're allowed to add new events (public or not).
   
   The only odd thing here is that this event is not public, so you can't really handle it outside of Spark (without reflection), but I don't see an easy way to solve this differently (short of adding locking, which I don't want to do).
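
The thread-safety idea being discussed — a callback arriving on an unrelated cleaner thread reposts itself as an event onto a single-threaded listener bus, so the listener's mutable state is only ever touched from the bus thread and needs no locks — can be sketched as a small standalone model. This is not Spark's actual `ListenerBus` API; `MiniBus`, `Monitor`, and the `ShuffleCleaned` event are invented here purely for illustration:

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Hypothetical event type, standing in for Spark's ShuffleCleanedEvent.
final case class ShuffleCleaned(id: Int)

// Minimal stand-in for a listener bus: a single-threaded executor that
// delivers events to listeners one at a time, in posting order.
final class MiniBus {
  private val exec = Executors.newSingleThreadExecutor()
  private var handlers = List.empty[Any => Unit]

  // Called during setup, before any events are posted.
  def addListener(h: Any => Unit): Unit = handlers ::= h

  // post() may be called from any thread; handlers always run on the
  // single bus thread, so listener state needs no locking.
  def post(event: Any): Unit =
    exec.submit(new Runnable { def run(): Unit = handlers.foreach(_(event)) })

  def stop(): Unit = {
    exec.shutdown()
    exec.awaitTermination(5, TimeUnit.SECONDS)
  }
}

final class Monitor(bus: MiniBus) {
  // Mutable state, touched only from the bus thread.
  private var cleaned = Set.empty[Int]

  bus.addListener {
    case ShuffleCleaned(id) => cleaned += id
    case _ => // ignore events we don't understand (the onOtherEvent contract)
  }

  // Invoked from the cleaner thread: don't mutate state here, just repost.
  def shuffleCleaned(id: Int): Unit = bus.post(ShuffleCleaned(id))

  def cleanedIds: Set[Int] = cleaned
}
```

The catch-all `case _ =>` is the key to the contract mentioned above: any listener must silently ignore event types it does not recognize, which is what lets new (even private) events be added without breaking existing listeners.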

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
