mridulm commented on a change in pull request #32287:
URL: https://github.com/apache/spark/pull/32287#discussion_r619982295



##########
File path: 
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -683,7 +694,28 @@ final class ShuffleBlockFetcherIterator(
             }
           }
 
-        case FailureFetchResult(blockId, mapIndex, address, e) =>
+        // Catching OOM and do something based on it is only a workaround for handling the
+        // Netty OOM issue, which is not the best way towards memory management. We can
+        // get rid of it when we find a way to manage Netty's memory precisely.
+        case FailureFetchResult(blockId, mapIndex, address, size, isNetworkReqDone, e)
+            if e.isInstanceOf[OutOfDirectMemoryError] || e.isInstanceOf[NettyOutOfMemoryError] =>
+          assert(address != blockManager.blockManagerId &&
+            !hostLocalBlocks.contains(blockId -> mapIndex),
+            "Netty OOM error should only happen on remote fetch requests")
+          logWarning(s"Failed to fetch block $blockId due to Netty OOM, will retry", e)
+          NettyUtils.isNettyOOMOnShuffle = true
+          numBlocksInFlightPerAddress(address) = numBlocksInFlightPerAddress(address) - 1
+          bytesInFlight -= size
+          if (isNetworkReqDone) {
+            reqsInFlight -= 1
+            logDebug("Number of requests in flight " + reqsInFlight)
+          }
+          val defReqQueue =
+            deferredFetchRequests.getOrElseUpdate(address, new Queue[FetchRequest]())
+          defReqQueue.enqueue(FetchRequest(address, Array(FetchBlockInfo(blockId, size, mapIndex))))

Review comment:
       > In a large-scale cluster, I think there is already plenty of load, so I'd prefer to avoid stage recomputation as much as we can.
   
   Agreed on the need to minimize stage recomputation if we can judiciously retry.
   The concern is that repeated fetches can cause issues for all applications relying on the shuffle service - not just a single application - so the impact spans apps.
   
   

##########
File path: 
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -683,7 +694,28 @@ final class ShuffleBlockFetcherIterator(
             }
           }
 
-        case FailureFetchResult(blockId, mapIndex, address, e) =>
+        // Catching OOM and do something based on it is only a workaround for handling the
+        // Netty OOM issue, which is not the best way towards memory management. We can
+        // get rid of it when we find a way to manage Netty's memory precisely.
+        case FailureFetchResult(blockId, mapIndex, address, size, isNetworkReqDone, e)
+            if e.isInstanceOf[OutOfDirectMemoryError] || e.isInstanceOf[NettyOutOfMemoryError] =>
+          assert(address != blockManager.blockManagerId &&
+            !hostLocalBlocks.contains(blockId -> mapIndex),
+            "Netty OOM error should only happen on remote fetch requests")
+          logWarning(s"Failed to fetch block $blockId due to Netty OOM, will retry", e)
+          NettyUtils.isNettyOOMOnShuffle = true
+          numBlocksInFlightPerAddress(address) = numBlocksInFlightPerAddress(address) - 1
+          bytesInFlight -= size
+          if (isNetworkReqDone) {
+            reqsInFlight -= 1
+            logDebug("Number of requests in flight " + reqsInFlight)
+          }
+          val defReqQueue =
+            deferredFetchRequests.getOrElseUpdate(address, new Queue[FetchRequest]())
+          defReqQueue.enqueue(FetchRequest(address, Array(FetchBlockInfo(blockId, size, mapIndex))))

Review comment:
       +CC @otterc. Can you take a look at @Ngone51's idea above pls? Thx



