attilapiros commented on a change in pull request #30763:
URL: https://github.com/apache/spark/pull/30763#discussion_r564633862
##########
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##########
@@ -858,8 +883,8 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
/** Unregister shuffle data. */
- def unregisterShuffle(shuffleId: Int): Unit = {
- mapStatuses.remove(shuffleId)
+ def unregisterShuffle(shuffleId: Int, blocking: Boolean): Boolean = {
Review comment:
No, it isn't used in this implementation.
But the parameter is still needed in the signature to override the abstract
method coming from the `MapOutputTracker` class:
https://github.com/apache/spark/blob/a09ced3589d90c533744a04f0c3b976d4bb42352/core/src/main/scala/org/apache/spark/MapOutputTracker.scala#L391
The parameter is introduced on the abstract `MapOutputTracker` method so the
method can be reused during context cleaning:
https://github.com/attilapiros/spark/blob/a09ced3589d90c533744a04f0c3b976d4bb42352/core/src/main/scala/org/apache/spark/ContextCleaner.scala#L223
There we can trigger a blocking or a non-blocking cleanup depending on the
config `spark.cleaner.referenceTracking.blocking.shuffle`.
So I think this is fine here.
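
To illustrate the shape of this, here is a minimal, hypothetical sketch (the names `Tracker`, `WorkerTracker`, and `Cleaner` are stand-ins, not the real Spark classes): the driver side could honour `blocking`, the worker side ignores it, and the cleaner only programs against the abstract signature.

```scala
import scala.collection.concurrent.TrieMap

// Abstract signature: every tracker must accept `blocking`, even if it ignores it.
abstract class Tracker {
  /** Returns true if the shuffle was known and removed. */
  def unregisterShuffle(shuffleId: Int, blocking: Boolean): Boolean
}

// Worker-side analogue: `blocking` is unused here, but the signature must match
// the abstract method in order to override it.
class WorkerTracker extends Tracker {
  private val mapStatuses = TrieMap.empty[Int, String]

  override def unregisterShuffle(shuffleId: Int, blocking: Boolean): Boolean =
    mapStatuses.remove(shuffleId).isDefined
}

// Caller in the spirit of ContextCleaner.doCleanupShuffle: the flag is read once
// from config (a plain Boolean here stands in for
// `spark.cleaner.referenceTracking.blocking.shuffle`).
class Cleaner(tracker: Tracker, blockOnShuffleCleanup: Boolean) {
  def doCleanupShuffle(shuffleId: Int): Unit = {
    if (tracker.unregisterShuffle(shuffleId, blocking = blockOnShuffleCleanup)) {
      println(s"Cleaned shuffle $shuffleId")
    }
  }
}
```

This keeps a single call site in the cleaner, regardless of which tracker implementation (driver- or worker-side) sits behind the abstract type.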