Ngone51 commented on a change in pull request #33451:
URL: https://github.com/apache/spark/pull/33451#discussion_r677449980
##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
@@ -822,8 +836,15 @@ final class ShuffleBlockFetcherIterator(
           }
         } catch {
           case e: IOException =>
-            buf.release()
+            // When shuffle checksum is enabled, for a block that is corrupted twice,
+            // we'd calculate the checksum of the block by consuming the remaining data
+            // in the buf. So, we should release the buf later.
+            if (!(checksumEnabled && corruptedBlocks.contains(blockId))) {
+              buf.release()
+            }
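
For reference, a minimal sketch of what "calculate the checksum by consuming the remaining data" can look like (CRC32 and the helper name here are illustrative only, not necessarily what this PR uses):

```scala
import java.io.{ByteArrayInputStream, InputStream}
import java.util.zip.{CheckedInputStream, CRC32}

def consumeAndChecksum(in: InputStream): Long = {
  // Wrap the (possibly partially read) stream so every remaining byte
  // read through it updates the checksum.
  val checked = new CheckedInputStream(in, new CRC32())
  val buffer = new Array[Byte](8192)
  while (checked.read(buffer) != -1) {} // drain whatever is left in the buf
  checked.getChecksum.getValue
}

// e.g. consumeAndChecksum(new ByteArrayInputStream("remaining data".getBytes))
```

This is why the buf can't be released in the catch block for that case: the remaining bytes are still needed for the diagnosis.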
Review comment:
> To clarify, with this change, for all fetches after the first failure, we will diagnose (except if cause == disk)?

No. We only diagnose when a block is corrupted a second time (i.e., blocks that can be found in `corruptedBlocks`). There is one exception: corruption detected from `BufferReleasingInputStream`. In that case, the data stream has already been partially consumed by the downstream RDDs, so we have no chance to retry, and we diagnose right away.
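
To make the condition concrete, a simplified sketch (`String` block ids and the `fromBufferReleasingInputStream` flag are illustrative, not Spark's real types):

```scala
def shouldDiagnose(
    checksumEnabled: Boolean,
    corruptedBlocks: scala.collection.Set[String],
    blockId: String,
    fromBufferReleasingInputStream: Boolean): Boolean = {
  // Diagnose only on a second corruption, except when the stream was already
  // partially consumed (BufferReleasingInputStream), where no retry is possible.
  checksumEnabled &&
    (corruptedBlocks.contains(blockId) || fromBufferReleasingInputStream)
}
```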
(Previously, we diagnosed a block the first time it was corrupted, and whether to retry the block depended on the diagnosis result. The problem with that approach is that it blocks the fetcher's thread, so it may introduce a performance regression. Now, for a block corrupted the first time, we still always retry it (unchanged from the current behavior). Only if it is corrupted again do we diagnose it and throw a fetch failure with the diagnosed cause, if any.)
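
Roughly, the new flow looks like this sketch (`retryFetch` and `diagnoseCorruption` are placeholder names standing in for the real fetcher logic, not Spark's API):

```scala
import scala.collection.mutable

val corruptedBlocks = mutable.HashSet[String]()

def retryFetch(blockId: String): Unit =
  println(s"re-enqueue fetch for $blockId") // placeholder

def diagnoseCorruption(blockId: String): String =
  "checksum mismatch" // placeholder diagnosed cause

def onCorruption(blockId: String): Unit = {
  if (corruptedBlocks.add(blockId)) {
    // First corruption: always retry, unchanged from the current behavior.
    retryFetch(blockId)
  } else {
    // Second corruption: run the (potentially slow) diagnosis off the first
    // failure path and fail the fetch with the diagnosed cause.
    val cause = diagnoseCorruption(blockId)
    throw new java.io.IOException(s"Block $blockId corrupted twice: $cause")
  }
}
```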