Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10076#discussion_r46364753
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -253,7 +253,7 @@ private[spark] class Executor(
     
             val directResult = new DirectTaskResult(valueBytes, accumUpdates, task.metrics.orNull)
             val serializedDirectResult = ser.serialize(directResult)
    -        val resultSize = serializedDirectResult.limit
    +        val resultSize = serializedDirectResult.remaining()
    --- End diff --
    
    > Do we have a situation where the position is not 0 though, but is correctly at the start of the data?
    
    If a `ByteBuffer` comes from Netty, its position can be non-zero.
    
    > Equally, if that's an issue, are we sure the entire buffer has valid data, through the end? that assumption is still present here, that the end of the data is the end of the buffer.
    
    The `ByteBuffer` may contain more data internally, but the user should only read the part between `position` and `limit`. I think that contract is defined in the `Buffer`/`ByteBuffer` javadoc.
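    The difference between `limit` and `remaining()` can be seen with a small self-contained sketch (the 10-byte buffer and the 4-byte "already-consumed header" are made up for illustration, standing in for a Netty-sourced buffer whose position is non-zero):

    ```java
    import java.nio.ByteBuffer;

    public class RemainingDemo {
        public static void main(String[] args) {
            // A 10-byte buffer; imagine a framing layer has already
            // consumed a 4-byte header, leaving position at 4.
            ByteBuffer buf = ByteBuffer.allocate(10);
            buf.position(4);

            // limit() still counts the consumed header bytes...
            System.out.println(buf.limit());      // prints 10

            // ...while remaining() = limit - position is the readable payload.
            System.out.println(buf.remaining());  // prints 6
        }
    }
    ```

    When position is 0 the two values coincide, which is why `limit` happened to work for most buffers; `remaining()` is correct in both cases.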


