squito commented on a change in pull request #23453: [SPARK-26089][CORE] Handle corruption in large shuffle blocks
URL: https://github.com/apache/spark/pull/23453#discussion_r264274741
##########
File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
##########
```diff
@@ -466,16 +469,19 @@ final class ShuffleBlockFetcherIterator(
       var isStreamCopied: Boolean = false
       try {
         input = streamWrapper(blockId, in)
-        // Only copy the stream if it's wrapped by compression or encryption, also the size of
-        // block is small (the decompressed block is smaller than maxBytesInFlight)
-        if (detectCorrupt && !input.eq(in) && size < maxBytesInFlight / 3) {
+        // Only copy the stream if it's wrapped by compression or encryption, up to a size of
+        // maxBytesInFlight / 3. If the stream is longer, corruption will be caught while
+        // reading the stream.
+        streamCompressedOrEncrypted = !input.eq(in)
+        if (streamCompressedOrEncrypted && detectCorruptUseExtraMemory) {
           isStreamCopied = true
-          val out = new ChunkedByteBufferOutputStream(64 * 1024, ByteBuffer.allocate)
-          // Decompress the whole block at once to detect any corruption, which could increase
-          // the memory usage tne potential increase the chance of OOM.
+          // Decompress the block up to maxBytesInFlight / 3 at once to detect any corruption,
+          // which could increase the memory usage and potentially increase the chance of OOM.
           // TODO: manage the memory used here, and spill it into disk in case of OOM.
-          Utils.copyStream(input, out, closeStreams = true)
-          input = out.toChunkedByteBuffer.toInputStream(dispose = true)
+          val (fullyCopied: Boolean, mergedStream: InputStream) = Utils.copyStreamUpTo(
+            input, maxBytesInFlight / 3)
```
Review comment:
I'm trying to understand why the
```scala
finally {
  // TODO: release the buf here to free memory earlier
  if (isStreamCopied) {
    in.close()
  }
}
```
is needed down below. To be honest, I don't think it was needed in the old
code. The old `Utils.copyStream` was always called with `closeStreams=true`,
and that would always close the input in a `finally` itself:
https://github.com/apache/spark/blob/d9978fb4e4d4de3a320b012373c18bd278462780/core/src/main/scala/org/apache/spark/util/Utils.scala#L302-L307
https://github.com/apache/spark/blob/d9978fb4e4d4de3a320b012373c18bd278462780/core/src/main/scala/org/apache/spark/util/Utils.scala#L330-L332
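For reference, the old helper had roughly this shape (paraphrased from the permalinks above, not the verbatim Spark code):
```scala
import java.io.{InputStream, OutputStream}

// Paraphrased shape of the old Utils.copyStream: with closeStreams = true,
// both streams are closed in a finally regardless of what happens during the copy.
def copyStream(
    in: InputStream,
    out: OutputStream,
    closeStreams: Boolean = false): Long = {
  var count = 0L
  try {
    val buf = new Array[Byte](8192)
    var n = in.read(buf)
    while (n != -1) {
      out.write(buf, 0, n)
      count += n
      n = in.read(buf)
    }
    count
  } finally {
    if (closeStreams) {
      // input closed first; output still closed even if in.close() throws
      try in.close() finally out.close()
    }
  }
}
```
So `in` was guaranteed closed on every path when `closeStreams = true`.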
It doesn't hurt, but it also makes things unnecessarily confusing. If you didn't need to do that `in.close()` below, you wouldn't need to track `isStreamCopied`, and wouldn't even need to return `fullyCopied` from `Utils.copyStreamUpTo`. That's really the part of this that's bugging me: something seems off that we need to know whether or not the stream was fully copied; it seems like it shouldn't matter. If it does matter, aren't we getting something wrong in the case where the stream is *exactly* maxBytesInFlight / 3 bytes, but we haven't realized it's fully copied because we haven't read past the end yet?
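To make that exact-size case concrete: a copy-up-to helper can only learn that the stream ended exactly at the limit by probing one byte past it. A minimal sketch of what I mean (hypothetical helper, not the PR's actual `Utils.copyStreamUpTo`):
```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream, SequenceInputStream}

// Hypothetical sketch: copy at most maxSize bytes, then probe one extra byte
// to decide whether the stream ended exactly at the limit.
def copyUpTo(in: InputStream, maxSize: Long): (Boolean, InputStream) = {
  val out = new ByteArrayOutputStream()
  val buf = new Array[Byte](8192)
  var copied = 0L
  var n = 0
  while (copied < maxSize && n != -1) {
    n = in.read(buf, 0, math.min(buf.length.toLong, maxSize - copied).toInt)
    if (n > 0) {
      out.write(buf, 0, n)
      copied += n
    }
  }
  // Without this probe, a stream of exactly maxSize bytes is indistinguishable
  // from a longer one: both have yielded maxSize bytes so far.
  val probe = in.read()
  val copiedStream = new ByteArrayInputStream(out.toByteArray)
  if (probe == -1) {
    in.close()
    (true, copiedStream)
  } else {
    // Stitch the probed byte back in front of the rest of the original stream.
    val rest = new SequenceInputStream(new ByteArrayInputStream(Array(probe.toByte)), in)
    (false, new SequenceInputStream(copiedStream, rest))
  }
}
```
If the real helper skips that probe read, `fullyCopied` comes back false for a block of exactly maxBytesInFlight / 3 bytes, and the close logic above takes the wrong branch even though the whole block was in fact copied.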