liupc commented on a change in pull request #27604:
[SPARK-30849][CORE][SHUFFLE]Fix application failed due to failed to get
MapStatuses broadcast block
URL: https://github.com/apache/spark/pull/27604#discussion_r390111206
##########
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##########
@@ -851,8 +852,16 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
       var fetchedStatuses = mapStatuses.get(shuffleId).orNull
       if (fetchedStatuses == null) {
         logInfo("Doing the fetch; tracker endpoint = " + trackerEndpoint)
-        val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
-        fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
+        try {
+          val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
+          fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
+        } catch {
+          case e: IOException if
+              Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException]) =>
+            mapStatuses.clear()
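For reference, the catch guard above walks the whole cause chain rather than checking only `e.getCause`, since the `BlockNotFoundException` may be wrapped several layers deep by the fetch path. A minimal, self-contained sketch of that pattern, using Guava's `Throwables.getCausalChain` and a local stand-in for Spark's `BlockNotFoundException` (the block id is illustrative):

```scala
import java.io.IOException
import com.google.common.base.Throwables
import scala.collection.JavaConverters._

object CausalChainCheckSketch {
  // Local stand-in for org.apache.spark.storage.BlockNotFoundException,
  // defined here only so the sketch is self-contained.
  class BlockNotFoundException(blockId: String)
    extends Exception(s"Block $blockId not found")

  def main(args: Array[String]): Unit = {
    // Simulate a fetch failure whose root cause is a missing broadcast block.
    val e = new IOException(
      "Failed to get broadcast piece",
      new BlockNotFoundException("broadcast_42_piece0"))

    // Same shape as the catch guard in the diff: walk the full cause chain,
    // because the BlockNotFoundException may not be the direct cause.
    val missingBlock = Throwables.getCausalChain(e)
      .asScala
      .exists(_.isInstanceOf[BlockNotFoundException])

    println(s"caused by BlockNotFoundException: $missingBlock") // prints true
  }
}
```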
Review comment:
@cloud-fan Good question! Yes, it's OK to clear all the map statuses, but I think dropping just the data for the current shuffle id would be enough. However, we currently bind a global epoch to the `MapOutputTracker`: if one stage hits a `FetchFailed`, the epoch is updated, which clears the entire map statuses cache on the executor side.
Should we change this behavior? If so, maybe we can open another PR for that.
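To make the epoch point concrete, here is a simplified sketch (not the actual `MapOutputTrackerWorker` code) of the invalidation behavior described above: a single driver-side epoch is propagated to executors, and any bump, even one triggered by a `FetchFailed` in an unrelated stage, drops the cached statuses for every shuffle:

```scala
import scala.collection.concurrent.TrieMap

// Simplified, illustrative sketch of the executor-side epoch check discussed
// above; the field names mirror MapOutputTracker but the class is hypothetical.
class EpochedCacheSketch {
  private val mapStatuses = new TrieMap[Int, Array[Byte]]() // shuffleId -> serialized statuses
  private var epoch: Long = 0L

  // The driver's latest epoch arrives with each task. If it has advanced,
  // the whole cache is cleared (not just the shuffle that failed), which is
  // the coarse-grained behavior the comment suggests revisiting.
  def updateEpoch(newEpoch: Long): Unit = synchronized {
    if (newEpoch > epoch) {
      epoch = newEpoch
      mapStatuses.clear()
    }
  }
}
```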