Github user ConeyLiu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19285#discussion_r139917367
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
    @@ -233,17 +235,13 @@ private[spark] class MemoryStore(
         }
     
         if (keepUnrolling) {
    -      // We successfully unrolled the entirety of this block
    -      val arrayValues = vector.toArray
    -      vector = null
    -      val entry =
     -        new DeserializedMemoryEntry[T](arrayValues, SizeEstimator.estimate(arrayValues), classTag)
    -      val size = entry.size
    +      // get the precise size
    +      val size = estimateSize(true)
    --- End diff --
    
This only means we unrolled the iterator successfully. The precise size of the
underlying vector may still be greater than `unrollMemoryUsedByThisBlock`, so we
need to request more memory for the difference. At that point it is possible
that we cannot acquire enough memory, so we should only call
`bbos.toChunkedByteBuffer` or `vector.toArray` after the additional memory has
been requested successfully.
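
To make the intended ordering concrete, here is a minimal standalone sketch.
This is not the actual `MemoryStore` code: `estimatePreciseSize` and
`reserveMoreMemory` are hypothetical stand-ins for `SizeEstimator.estimate`
and `reserveUnrollMemoryForThisTask`.

```scala
import scala.reflect.ClassTag

object UnrollOrderingSketch {
  // Sketch: materialize the unrolled values only after the precise size has
  // been fully reserved. `estimatePreciseSize` and `reserveMoreMemory` are
  // hypothetical stand-ins for the MemoryStore internals named above.
  def materializeIfReserved[T: ClassTag](
      vector: Vector[T],
      unrollMemoryUsedByThisBlock: Long,
      estimatePreciseSize: () => Long,
      reserveMoreMemory: Long => Boolean): Option[Array[T]] = {
    val preciseSize = estimatePreciseSize()
    val shortfall = preciseSize - unrollMemoryUsedByThisBlock
    // Request the shortfall *before* calling vector.toArray: if the request
    // fails, we have not yet doubled the footprint by materializing an array
    // alongside the vector.
    if (shortfall <= 0 || reserveMoreMemory(shortfall)) {
      Some(vector.toArray) // safe: the full precise size is now reserved
    } else {
      None // not enough memory; the caller falls back (e.g. spill to disk)
    }
  }
}
```

The same ordering applies on the serialized path: only call
`bbos.toChunkedByteBuffer` once the extra reservation has succeeded, and
otherwise fall back without materializing.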

