cloud-fan commented on code in PR #52606:
URL: https://github.com/apache/spark/pull/52606#discussion_r2450881364


##########
core/src/main/scala/org/apache/spark/storage/BlockManagerStorageEndpoint.scala:
##########
@@ -57,7 +57,14 @@ class BlockManagerStorageEndpoint(
 
     case RemoveShuffle(shuffleId) =>
      doAsync[Boolean](log"removing shuffle ${MDC(SHUFFLE_ID, shuffleId)}", context) {
-        if (mapOutputTracker != null) {
+        if (mapOutputTracker != null && !mapOutputTracker.isInstanceOf[MapOutputTrackerMaster]) {
+          // SPARK-53898: `MapOutputTrackerMaster.unregisterShuffle()` should only be called
+          // through `ContextCleaner` when the shuffle is considered no longer referenced anywhere.
+          // Otherwise, we might hit exceptions if there is any subsequent access (which still
+          // references that shuffle) to that shuffle metadata in `MapOutputTrackerMaster`. E.g.,
+          // an ongoing subquery could access the same shuffle metadata which could have been
+          // cleaned up after the main query completes. Note this currently only happens in a local
+          // cluster where both driver and executor use the `MapOutputTrackerMaster`.
           mapOutputTracker.unregisterShuffle(shuffleId)

Review Comment:
   One idea: shall we add a new method `clearShuffleStatusCache` and call it here? The executor-side shuffle status is more like a cache, while the driver-side one is the single source of truth, so `MapOutputTrackerMaster#clearShuffleStatusCache` would be a no-op.
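
For illustration, here is a minimal, self-contained sketch of what that hook could look like. The class and member names (`MapOutputTrackerSketch`, `cachedStatuses`, `cache`) are hypothetical stand-ins rather than Spark's actual internals; the sketch only captures the idea that the executor-side status is a re-fetchable cache while the driver-side status stays authoritative.

```scala
import scala.collection.concurrent.TrieMap

// Simplified stand-ins for the trackers, only to show the shape of the proposed
// `clearShuffleStatusCache` hook (all names here are hypothetical).
abstract class MapOutputTrackerSketch {
  def clearShuffleStatusCache(shuffleId: Int): Unit
}

class MapOutputTrackerMasterSketch extends MapOutputTrackerSketch {
  // The driver holds the authoritative shuffle status, so clearing the "cache"
  // is a no-op here; real cleanup would still go through ContextCleaner.
  override def clearShuffleStatusCache(shuffleId: Int): Unit = ()
}

class MapOutputTrackerWorkerSketch extends MapOutputTrackerSketch {
  // Executor-side view of shuffle statuses, modeled as a simple concurrent cache.
  private val cachedStatuses = TrieMap.empty[Int, String]

  def cache(shuffleId: Int, status: String): Unit = {
    cachedStatuses.put(shuffleId, status)
  }

  override def clearShuffleStatusCache(shuffleId: Int): Unit = {
    // Dropping the cached entry is always safe: it can be re-fetched from the driver.
    cachedStatuses.remove(shuffleId)
  }
}

object ClearShuffleStatusCacheSketch {
  def main(args: Array[String]): Unit = {
    val worker = new MapOutputTrackerWorkerSketch
    worker.cache(shuffleId = 1, status = "map statuses for shuffle 1")
    // With such a hook, the RemoveShuffle handler could call clearShuffleStatusCache
    // unconditionally instead of checking isInstanceOf[MapOutputTrackerMaster].
    worker.clearShuffleStatusCache(1)
    new MapOutputTrackerMasterSketch().clearShuffleStatusCache(1) // no-op on the driver
  }
}
```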


