otterc commented on a change in pull request #32287:
URL: https://github.com/apache/spark/pull/32287#discussion_r618789195



##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -613,6 +618,12 @@ final class ShuffleBlockFetcherIterator(
           }
           if (isNetworkReqDone) {
             reqsInFlight -= 1
+            if (!buf.isInstanceOf[NettyManagedBuffer]) {
+              // Non-`NettyManagedBuffer` doesn't occupy Netty's memory so we can unset the flag
+              // directly once the request succeeds. But for the `NettyManagedBuffer`, we'll only
+              // unset the flag when the data is fully consumed (see `BufferReleasingInputStream`).
+              NettyUtils.isNettyOOMOnShuffle = false

Review comment:
       This flag is in `NettyUtils`, so I am assuming that if there are multiple tasks, each with its own iterator, they will all be setting/unsetting the same flag. Is that correct?
   Wouldn't a single iterator unsetting the flag here, just because its own buffer is not a `NettyManagedBuffer`, cause a problem for the other iterators?
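   For illustration, the interleaving I am worried about would look something like this (a hypothetical sketch of two tasks in the same executor JVM, not code from this PR):

   ```scala
   // Task A fetches NettyManagedBuffers and hits a Netty OOM:
   NettyUtils.isNettyOOMOnShuffle = true   // Task A starts deferring its remote fetches.

   // Task B's request for a non-NettyManagedBuffer block then succeeds:
   NettyUtils.isNettyOOMOnShuffle = false  // Task B unsets the shared flag...

   // ...even though Task A's NettyManagedBuffers are still unconsumed and Netty's
   // direct memory may still be under pressure, so Task A resumes sending requests
   // and may hit the OOM again.
   ```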

##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -683,7 +694,28 @@ final class ShuffleBlockFetcherIterator(
             }
           }
 
-        case FailureFetchResult(blockId, mapIndex, address, e) =>
+        // Catching the OOM and reacting to it is only a workaround for handling the
+        // Netty OOM issue, not the best way towards memory management. We can
+        // get rid of it once we find a way to manage Netty's memory precisely.
+        case FailureFetchResult(blockId, mapIndex, address, size, isNetworkReqDone, e)
+            if e.isInstanceOf[OutOfDirectMemoryError] || e.isInstanceOf[NettyOutOfMemoryError] =>
+          assert(address != blockManager.blockManagerId &&
+            !hostLocalBlocks.contains(blockId -> mapIndex),
+            "Netty OOM error should only happen on remote fetch requests")
+          logWarning(s"Failed to fetch block $blockId due to Netty OOM, will retry", e)
+          NettyUtils.isNettyOOMOnShuffle = true
+          numBlocksInFlightPerAddress(address) = numBlocksInFlightPerAddress(address) - 1
+          bytesInFlight -= size
+          if (isNetworkReqDone) {
+            reqsInFlight -= 1
+            logDebug("Number of requests in flight " + reqsInFlight)
+          }
+          val defReqQueue =
+            deferredFetchRequests.getOrElseUpdate(address, new Queue[FetchRequest]())
+          defReqQueue.enqueue(FetchRequest(address, Array(FetchBlockInfo(blockId, size, mapIndex))))

Review comment:
       If an executor is not assigned enough off-heap memory, is it possible that it keeps retrying forever? Say there is some skew, one of the blocks is large, and the executor isn't assigned enough memory, so Netty OOMs whenever this block is fetched. Will it keep retrying indefinitely? Maybe I am missing something.
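   If unbounded retries are indeed possible here, maybe a per-block retry cap could bound them. A rough sketch of what I mean (the counter, `maxNettyOOMRetries`, and the reuse of the iterator's existing `throwFetchFailedException` helper are all hypothetical, not part of this PR):

   ```scala
   // Hypothetical guard layered onto the Netty-OOM case from the diff above.
   private[this] val nettyOOMRetries = scala.collection.mutable.HashMap[BlockId, Int]()
   private[this] val maxNettyOOMRetries = 3  // hypothetical limit; could be a config

   case FailureFetchResult(blockId, mapIndex, address, size, isNetworkReqDone, e)
       if e.isInstanceOf[OutOfDirectMemoryError] || e.isInstanceOf[NettyOutOfMemoryError] =>
     val retries = nettyOOMRetries.getOrElse(blockId, 0)
     if (retries >= maxNettyOOMRetries) {
       // Give up on deferral and surface the failure so the task fails fast
       // instead of retrying the same oversized block forever.
       throwFetchFailedException(blockId, mapIndex, address, e)
     } else {
       nettyOOMRetries(blockId) = retries + 1
       // ...defer the request exactly as in the current diff...
     }
   ```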

##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -268,7 +269,10 @@ final class ShuffleBlockFetcherIterator(
 
       override def onBlockFetchFailure(blockId: String, e: Throwable): Unit = {
         logError(s"Failed to get block(s) from 
${req.address.host}:${req.address.port}", e)
-        results.put(new FailureFetchResult(BlockId(blockId), 
infoMap(blockId)._2, address, e))
+        remainingBlocks -= blockId

Review comment:
       If the error is an instance of `OutOfDirectMemoryError` or `NettyOutOfMemoryError`, should we set `isNettyOOMOnShuffle` to `true` here? The reason is that we would want to stop sending more remote fetch requests as soon as possible. Right now the flag is only set when this particular `FailureFetchResult` is picked up from the `results` queue and processed.
   I am just thinking it would be better to check the error type here and set `NettyUtils.isNettyOOMOnShuffle = true` immediately.

   To add the request to the deferred queue, I think it would be better to create a new type of `FetchResult`, say `DeferFetchResult`, and add that to `results`. The reason is that we are currently modifying `FailureFetchResult` even though this fix is just a workaround. Once we get rid of the workaround, it will be much simpler to remove `DeferFetchResult` than to revert the changes to `FailureFetchResult`. An added benefit is that the other places where `FailureFetchResult` is created would not need to change.
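   Roughly what I have in mind (a sketch only; `DeferFetchResult` is hypothetical, and the `FetchResult` parent trait and the `infoMap` layout of `(size, mapIndex)` are assumed from the surrounding code):

   ```scala
   // Hypothetical result type for fetches deferred due to Netty OOM.
   private[storage] case class DeferFetchResult(
       blockId: BlockId,
       mapIndex: Int,
       address: BlockManagerId,
       size: Long) extends FetchResult

   override def onBlockFetchFailure(blockId: String, e: Throwable): Unit = {
     logError(s"Failed to get block(s) from ${req.address.host}:${req.address.port}", e)
     remainingBlocks -= blockId
     e match {
       case _: OutOfDirectMemoryError | _: NettyOutOfMemoryError =>
         // Flip the flag eagerly so no further remote requests are sent while
         // this result is still waiting in the queue.
         NettyUtils.isNettyOOMOnShuffle = true
         results.put(DeferFetchResult(
           BlockId(blockId), infoMap(blockId)._2, address, infoMap(blockId)._1))
       case _ =>
         // FailureFetchResult keeps its original shape for all other errors.
         results.put(FailureFetchResult(BlockId(blockId), infoMap(blockId)._2, address, e))
     }
   }
   ```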



