otterc commented on a change in pull request #32287:
URL: https://github.com/apache/spark/pull/32287#discussion_r618737234
##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -268,7 +269,10 @@ final class ShuffleBlockFetcherIterator(
       override def onBlockFetchFailure(blockId: String, e: Throwable): Unit = {
         logError(s"Failed to get block(s) from ${req.address.host}:${req.address.port}", e)
-        results.put(new FailureFetchResult(BlockId(blockId), infoMap(blockId)._2, address, e))
+        remainingBlocks -= blockId
Review comment:
If the error is an instance of `OutOfDirectMemoryError` or
`NettyOutOfMemoryError`, should we set `isNettyOOMOnShuffle` to `true` here?
The reason is that we would want to stop sending more remote fetch requests as
soon as possible. Right now the flag is only set when this particular
`FailureFetchResult` is picked up from the `results` queue and processed.
I think it would be better to check the error type here and set
`NettyUtils.isNettyOOMOnShuffle = true` right away.
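A minimal, self-contained sketch of that early check (the object and helper
names here are hypothetical; the flag is a stand-in for the field this PR adds
to `NettyUtils`, and matching only Netty's `OutOfDirectMemoryError` is an
assumption):

```scala
import io.netty.util.internal.OutOfDirectMemoryError

object NettyOOMSketch {
  // Stand-in for the shuffle-wide flag this PR adds to
  // org.apache.spark.network.util.NettyUtils; its exact type is an
  // assumption here.
  @volatile var isNettyOOMOnShuffle: Boolean = false

  // Flip the flag as soon as the fetch-failure callback sees a Netty OOM,
  // so no further remote fetch requests are issued while direct memory is
  // exhausted.
  def markNettyOOMIfNeeded(e: Throwable): Unit = e match {
    case _: OutOfDirectMemoryError => isNettyOOMOnShuffle = true
    case _ => // non-OOM failures keep the existing error path
  }
}
```

`onBlockFetchFailure` could then call `markNettyOOMIfNeeded(e)` before putting
anything on the `results` queue.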
To add the request to the deferred queue, I think it would be better to create
a new type of `FetchResult`, say `DeferFetchResult`, and add that to
`results` instead. The reason is that this fix changes `FailureFetchResult`
for what seems to be just a workaround; once the workaround is no longer
needed, it will be much simpler to delete `DeferFetchResult` than to revert
the changes to `FailureFetchResult`. An added benefit is that the other places
where `FailureFetchResult` is created would remain unchanged. A sketch of the
new type follows below.
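A sketch of what `DeferFetchResult` could look like, slotting into the
existing `FetchResult` hierarchy in this file (the field list mirrors
`FailureFetchResult` minus the `Throwable` and is an assumption):

```scala
// Sketch only: a dedicated result type for blocks whose fetch is merely
// deferred (e.g. after a Netty OOM), leaving FailureFetchResult untouched.
private[storage] case class DeferFetchResult(
    blockId: BlockId,
    mapIndex: Int,
    address: BlockManagerId)
  extends FetchResult
```

When the iterator dequeues a `DeferFetchResult`, it would re-enqueue the block
on the deferred queue instead of treating it as a fatal failure.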
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]