Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r17645275
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -118,21 +118,29 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
       }
     
       /**
    -   * Cache the values of a partition, keeping track of any updates in the storage statuses
    -   * of other blocks along the way.
    +   * Cache the values of a partition, keeping track of any updates in the storage statuses of
    +   * other blocks along the way.
    +   *
    +   * The effective storage level refers to the level that actually specifies BlockManager put
    +   * behavior, not the level originally specified by the user. This is mainly for forcing a
    +   * MEMORY_AND_DISK partition to disk if there is not enough room to unroll the partition,
    +   * while preserving the original semantics of the RDD as specified by the application.
        */
       private def putInBlockManager[T](
    --- End diff --
    
    Ah, got it. So it follows the original implementation:
        
        val elements = new ArrayBuffer[Any]
        elements ++= computedValues
        ...
        return elements.iterator.asInstanceOf[Iterator[T]]
    
    This way we can ensure the returned iterator always has data for the user.
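    The pattern above can be sketched in isolation. This is a hypothetical, simplified version (the names `cacheAndReturn` and `store` are illustrative, not Spark's actual API): the computed iterator is materialized into a buffer so the values can be handed to the block store and still be returned to the caller, since an iterator can only be consumed once.

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Hypothetical sketch: materialize a one-pass iterator into a buffer so the
    // values can be passed to a store callback (e.g. a BlockManager put) AND
    // still be returned to the caller. Draining the iterator directly into the
    // store would leave nothing to give back.
    def cacheAndReturn[T](computedValues: Iterator[T], store: Seq[Any] => Unit): Iterator[T] = {
      val elements = new ArrayBuffer[Any]
      elements ++= computedValues   // the iterator is consumed exactly once, here
      store(elements.toSeq)         // hand the materialized values to the store
      elements.iterator.asInstanceOf[Iterator[T]]
    }
    ```

    The cast is safe at runtime because of type erasure; the buffer only ever holds values produced by the `Iterator[T]`.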

