Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10076#discussion_r46346022
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -253,7 +253,7 @@ private[spark] class Executor(
     
         val directResult = new DirectTaskResult(valueBytes, accumUpdates, task.metrics.orNull)
             val serializedDirectResult = ser.serialize(directResult)
    -        val resultSize = serializedDirectResult.limit
    +        val resultSize = serializedDirectResult.remaining()
    --- End diff ---
    
    You're right that there's an implicit assumption in some of this code
    that the buffer's position is 0 on return, and that the entire buffer
    is filled with valid data. Do we actually have a situation where the
    position is not 0 but is correctly at the start of the data? At least,
    this change looks like it handles that situation, though it sounds
    unusual. Equally, if that's a real issue, are we sure the entire buffer
    holds valid data through to the end? That assumption is still present
    here: that the end of the data is the end of the buffer.
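
    For reference, a minimal REPL-style sketch (not from the PR) of the
    ByteBuffer semantics under discussion: remaining() is defined as
    limit - position, so it only agrees with limit when the position is 0.

        import java.nio.ByteBuffer

        val buf = ByteBuffer.allocate(16)
        buf.put(Array[Byte](1, 2, 3, 4, 5, 6, 7, 8))
        buf.flip()                // position = 0, limit = 8

        buf.limit                 // 8
        buf.remaining()           // 8: equal, since position == 0

        buf.get(); buf.get()      // consume two bytes; position = 2
        buf.limit                 // still 8
        buf.remaining()           // 6: only the unread bytes remain

    So remaining() is the safer measure of payload size whenever the
    position may legitimately be nonzero, whereas limit also counts any
    bytes that have already been consumed.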

