warrenzhu25 commented on code in PR #37603:
URL: https://github.com/apache/spark/pull/37603#discussion_r964303436


##########
core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala:
##########
@@ -125,21 +126,25 @@ private[storage] class BlockManagerDecommissioner(
                   logDebug(s"Migrated sub-block $blockId")
                 }
               }
+              numMigratedShuffles.incrementAndGet()
               logInfo(s"Migrated $shuffleBlockInfo to $peer")
             } catch {
-              case e: IOException =>
+              case e @ (_: IOException | _: SparkException) =>
                // If a block got deleted before netty opened the file handle, then trying to
                // load the blocks now will fail. This is most likely to occur if we start
                // migrating blocks and then the shuffle TTL cleaner kicks in. However this
                // could also happen with manually managed shuffles or a GC event on the
                // driver freeing a no longer referenced RDD with shuffle files.
                if (bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo).size < blocks.size) {
                  logWarning(s"Skipping block $shuffleBlockInfo, block deleted.")
+                  numDeletedShuffles.incrementAndGet()

Review Comment:
   Thanks for the suggestion. Updated.
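   
   For context, the updated catch arm relies on Scala's alternative pattern with a binder, so a single handler covers both exception types. Below is a minimal, self-contained sketch of that pattern outside Spark; the local `SparkException` class is a stand-in for `org.apache.spark.SparkException`, and the `migrate` helper and its counters (mirroring `numMigratedShuffles`/`numDeletedShuffles` from the diff) are hypothetical names used only for illustration.
   
   ```scala
   import java.io.IOException
   import java.util.concurrent.atomic.AtomicInteger
   
   object MultiCatchSketch {
     // Hypothetical stand-in for org.apache.spark.SparkException.
     class SparkException(msg: String) extends Exception(msg)
   
     // Atomic counters, as in the diff, so concurrent migration
     // threads can update them safely.
     val numMigratedShuffles = new AtomicInteger(0)
     val numDeletedShuffles = new AtomicInteger(0)
   
     def migrate(succeed: Boolean): Unit = {
       try {
         if (!succeed) throw new SparkException("block deleted during migration")
         numMigratedShuffles.incrementAndGet()
       } catch {
         // `e @ (_: IOException | _: SparkException)` matches either
         // exception type and binds the caught value to `e` in one arm.
         case e @ (_: IOException | _: SparkException) =>
           println(s"Skipping block, block deleted: ${e.getMessage}")
           numDeletedShuffles.incrementAndGet()
       }
     }
   
     def main(args: Array[String]): Unit = {
       migrate(succeed = true)
       migrate(succeed = false)
       println(s"migrated=${numMigratedShuffles.get}, deleted=${numDeletedShuffles.get}")
     }
   }
   ```
   
   The single-arm alternative pattern keeps the recovery logic (the block-count check and the deleted-shuffle counter) in one place instead of duplicating it across two catch cases.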



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

