[GitHub] [spark] cloud-fan commented on a change in pull request #27604: [SPARK-30849][CORE][SHUFFLE]Fix application failed due to failed to get MapStatuses broadcast block

2020-09-25 Thread GitBox


cloud-fan commented on a change in pull request #27604:
URL: https://github.com/apache/spark/pull/27604#discussion_r494738627



##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
@@ -851,8 +852,16 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
 var fetchedStatuses = mapStatuses.get(shuffleId).orNull
 if (fetchedStatuses == null) {
   logInfo("Doing the fetch; tracker endpoint = " + trackerEndpoint)
-  val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
-  fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
+  try {
+    val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
+    fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
+  } catch {
+    case e: IOException if
+        Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException]) =>
+      mapStatuses.clear()

Review comment:
   But the issue here is the broadcast becoming invalid. I don't think that usually happens for many shuffles at the same time.
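
The guard in the diff above walks the exception's cause chain with Guava. A minimal sketch of that check, pulled out of the diff context (the helper name `hasMissingBlockCause` is mine, not from the PR):

```scala
import java.io.IOException
import scala.collection.JavaConverters._
import com.google.common.base.Throwables
import org.apache.spark.storage.BlockNotFoundException

// Hypothetical helper mirroring the guard in the diff: true when any
// link in the cause chain is a BlockNotFoundException, which here means
// the MapStatuses broadcast block was removed before the fetch finished.
def hasMissingBlockCause(e: IOException): Boolean =
  Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException])
```

`Throwables.getCausalChain` returns the exception itself plus every nested cause, so the check catches a `BlockNotFoundException` even when it is wrapped several layers deep inside the `IOException`.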





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org




[GitHub] [spark] cloud-fan commented on a change in pull request #27604: [SPARK-30849][CORE][SHUFFLE]Fix application failed due to failed to get MapStatuses broadcast block

2020-03-09 Thread GitBox
cloud-fan commented on a change in pull request #27604: [SPARK-30849][CORE][SHUFFLE]Fix application failed due to failed to get MapStatuses broadcast block
URL: https://github.com/apache/spark/pull/27604#discussion_r389603104
 
 

 ##
 File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
 ##
 @@ -851,8 +852,16 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
 var fetchedStatuses = mapStatuses.get(shuffleId).orNull
 if (fetchedStatuses == null) {
   logInfo("Doing the fetch; tracker endpoint = " + trackerEndpoint)
-  val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
-  fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
+  try {
+    val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
+    fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
+  } catch {
+    case e: IOException if
+        Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException]) =>
+      mapStatuses.clear()
 
 Review comment:
   Is it OK to clear out all the map statuses? Shouldn't we only drop the data for the current shuffle id?
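
Along the lines of this suggestion, the catch block could evict only the affected entry rather than wiping the whole cache. A sketch, assuming `mapStatuses` supports `remove` (not necessarily what the PR eventually merged):

```scala
} catch {
  case e: IOException if
      Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException]) =>
    // Only this shuffle's cached entry is known to be stale; statuses
    // cached for other shuffles are still valid, so keep them.
    mapStatuses.remove(shuffleId)
}
```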





[GitHub] [spark] cloud-fan commented on a change in pull request #27604: [SPARK-30849][CORE][SHUFFLE]Fix application failed due to failed to get MapStatuses broadcast block

2020-03-05 Thread GitBox
cloud-fan commented on a change in pull request #27604: [SPARK-30849][CORE][SHUFFLE]Fix application failed due to failed to get MapStatuses broadcast block
URL: https://github.com/apache/spark/pull/27604#discussion_r388343716
 
 

 ##
 File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
 ##
 @@ -824,11 +825,15 @@ private[spark] class MapOutputTrackerWorker(conf: SparkConf) extends MapOutputTr
       endPartition: Int): Iterator[(BlockManagerId, Seq[(BlockId, Long, Int)])] = {
     logDebug(s"Fetching outputs for shuffle $shuffleId, mappers $startMapIndex-$endMapIndex" +
       s"partitions $startPartition-$endPartition")
-    val statuses = getStatuses(shuffleId, conf)
     try {
+      val statuses = getStatuses(shuffleId, conf)
       MapOutputTracker.convertMapStatuses(
         shuffleId, startPartition, endPartition, statuses, startMapIndex, endMapIndex)
     } catch {
+      case e: IOException if
+          Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException]) =>
+        mapStatuses.clear()
 Review comment:
   It looks more consistent to me to throw `MetadataFetchFailedException` in 
`getStatuses`. Then we can reuse the handling of `MetadataFetchFailedException` 
below.
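
Following that suggestion, `getStatuses` itself could translate the missing-broadcast failure, so the existing `MetadataFetchFailedException` handling at the call site takes over. A sketch under the assumption that `MetadataFetchFailedException(shuffleId, reduceId, message)` is the available constructor:

```scala
try {
  val fetchedBytes = askTracker[Array[Byte]](GetMapOutputStatuses(shuffleId))
  fetchedStatuses = MapOutputTracker.deserializeMapStatuses(fetchedBytes, conf)
} catch {
  case e: IOException if
      Throwables.getCausalChain(e).asScala.exists(_.isInstanceOf[BlockNotFoundException]) =>
    logError("Failed to fetch map statuses for shuffle " + shuffleId, e)
    // Rethrow as the exception type the scheduler already handles: it
    // fails the stage and triggers a resubmission instead of failing
    // the whole application.
    throw new MetadataFetchFailedException(shuffleId, -1,
      "Unable to fetch broadcasted map statuses: " + e.getMessage)
}
```

The benefit is that every caller of `getStatuses` then sees one failure mode, and the retry path is the same one already exercised by ordinary metadata fetch failures.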

