Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22705#discussion_r225208317
  
    --- Diff: core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala ---
    @@ -195,7 +196,11 @@ object ChunkedByteBuffer {
         val is = new FileInputStream(file)
         ByteStreams.skipFully(is, offset)
         val in = new LimitedInputStream(is, length)
     -    val chunkSize = math.min(maxChunkSize, length).toInt
     +    // Though in theory you should be able to index into an array of size Int.MaxValue, in practice
     +    // JVMs don't let you go all the way up to that limit.  It seems you may only need - 2, but we
     +    // leave a little extra room.
     +    val maxArraySize = Int.MaxValue - 512
    --- End diff --
    
    great suggestion, thanks, just updated
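
    The diff above caps the chunk size below the JVM's practical maximum array length, since array allocations right at `Int.MaxValue` fail on real JVMs. A minimal, self-contained sketch of that cap (the object name `ChunkSizeSketch` and the helper `chunkSize` are illustrative, not Spark's actual API):

    ```scala
    object ChunkSizeSketch {
      // JVMs reserve a few header words per array, so the practical maximum
      // array length is slightly below Int.MaxValue; 512 leaves ample headroom.
      val maxArraySize: Int = Int.MaxValue - 512

      // Hypothetical helper mirroring the patched line: the chunk size is the
      // smallest of the configured max chunk size, the remaining length, and
      // the practical array-size ceiling, so the later Array[Byte] allocation
      // cannot exceed what the JVM will actually grant.
      def chunkSize(maxChunkSize: Int, length: Long): Int =
        math.min(maxChunkSize.toLong, math.min(length, maxArraySize.toLong)).toInt
    }
    ```

    Doing the comparison in `Long` before the final `.toInt` avoids overflow when `length` exceeds `Int.MaxValue`.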


---
