ukby1234 commented on code in PR #42296:
URL: https://github.com/apache/spark/pull/42296#discussion_r1292045322


##########
core/src/main/scala/org/apache/spark/MapOutputTracker.scala:
##########
@@ -1288,6 +1288,30 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
     mapSizesByExecutorId.iter
   }
 
+  def getMapOutputLocationWithRefresh(
+      shuffleId: Int,
+      mapId: Long,
+      prevLocation: BlockManagerId): BlockManagerId = {
+    // Try to get the cached location first in case other concurrent tasks
+    // fetched the fresh location already
+    var currentLocationOpt = getMapOutputLocation(shuffleId, mapId)
+    if (currentLocationOpt.isDefined && currentLocationOpt.get == prevLocation) {
+      // Address in the cache unchanged. Try to clean cache and get a fresh location
+      unregisterShuffle(shuffleId)
+      currentLocationOpt = getMapOutputLocation(shuffleId, mapId)
+    }
+    if (currentLocationOpt.isEmpty) {
+      throw new MetadataFetchFailedException(shuffleId, -1,
+        message = s"Failed to get map output location for shuffleId 
$shuffleId, mapId $mapId")
+    }
+    currentLocationOpt.get

Review Comment:
   When shuffle fallback storage is enabled, this `currentLocationOpt` can be the `FALLBACK_BLOCK_MANAGER_ID`, and `DeferFetchRequestResult` below doesn't handle that special case.
   So we should either 1) check the `FetchRequest` for the fallback storage special ID, or 2) rewrite the RPC address to localhost so we read the blocks from the fallback storage. A rough sketch of option 1 is below.
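   A minimal sketch of option 1, assuming the deferred-fetch retry path can branch on the refreshed location. `FallbackAwareFetch`, `readFromFallbackStorage`, and `retryDeferredFetch` are illustrative names only; `BlockManagerId`, `FallbackStorage.FALLBACK_BLOCK_MANAGER_ID`, and `getMapOutputLocationWithRefresh` are the identifiers already in Spark / this PR.

   ```scala
   // Illustrative package; it must live under org.apache.spark to see private[spark] APIs.
   package org.apache.spark.shuffle

   import org.apache.spark.storage.{BlockManagerId, FallbackStorage}

   // Hypothetical helper: detect the fallback-storage sentinel so the deferred
   // fetch retry does not send an RPC to the placeholder "remote" address.
   private[spark] object FallbackAwareFetch {
     def isFallbackLocation(location: BlockManagerId): Boolean =
       location == FallbackStorage.FALLBACK_BLOCK_MANAGER_ID
   }

   // Illustrative use on the retry path (readFromFallbackStorage / retryDeferredFetch
   // stand in for whatever the PR's deferred-fetch handling ends up calling):
   //   val refreshed =
   //     mapOutputTracker.getMapOutputLocationWithRefresh(shuffleId, mapId, prevLocation)
   //   if (FallbackAwareFetch.isFallbackLocation(refreshed)) {
   //     readFromFallbackStorage(shuffleId, mapId)   // read via the fallback storage path
   //   } else {
   //     retryDeferredFetch(refreshed)               // normal remote fetch retry
   //   }
   ```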
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

