Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r17617153
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -118,21 +118,29 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
       }
     
       /**
    -   * Cache the values of a partition, keeping track of any updates in the storage statuses
    -   * of other blocks along the way.
    +   * Cache the values of a partition, keeping track of any updates in the storage statuses of
    +   * other blocks along the way.
    +   *
    +   * The effective storage level refers to the level that actually specifies BlockManager put
    +   * behavior, not the level originally specified by the user. This is mainly for forcing a
    +   * MEMORY_AND_DISK partition to disk if there is not enough room to unroll the partition,
    +   * while preserving the original semantics of the RDD as specified by the application.
        */
       private def putInBlockManager[T](
    --- End diff --
    
Ah, this part is actually really tricky. If you just used `putIterator` here, the result would be incorrect, and the reason is quite subtle. Here in `getOrCompute` we need to return the actual iterator in addition to storing it in `BlockManager`. If we just used `putIterator` with the `MEMORY_ONLY` level, other threads might drop our block before we get to read it back, in which case we would have nothing to return because our original iterator was already exhausted.
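
To make the race concrete, here is a minimal self-contained Scala sketch (a toy model, not the actual `CacheManager`/`BlockManager` code; `ToyBlockStore` and its method names are hypothetical) contrasting the unsafe put-then-read-back pattern with unrolling into a local array first:

```scala
import scala.collection.mutable

object CacheRaceSketch {
  // Hypothetical stand-in for a block store whose memory-only blocks can be
  // evicted at any time by other threads.
  class ToyBlockStore {
    private val blocks = mutable.Map[String, Array[Any]]()
    def putIterator(key: String, values: Iterator[Any]): Unit =
      blocks(key) = values.toArray          // consumes the caller's iterator
    def putArray(key: String, values: Array[Any]): Unit =
      blocks(key) = values
    def get(key: String): Option[Iterator[Any]] =
      blocks.get(key).map(_.iterator)
    def evict(key: String): Unit =
      blocks -= key                         // simulates another thread dropping the block
  }

  def main(args: Array[String]): Unit = {
    val store = new ToyBlockStore
    val computed: Iterator[Any] = Iterator(1, 2, 3)   // the freshly computed partition

    // Unsafe pattern: hand the iterator to the store, then read the block back.
    store.putIterator("rdd_0_0", computed)
    store.evict("rdd_0_0")                  // block dropped before we read it back
    println(store.get("rdd_0_0"))           // None: nothing to return, and `computed` is exhausted

    // Safe pattern: unroll into a local array first, then put the array.
    val arr: Array[Any] = Array(1, 2, 3)
    store.putArray("rdd_0_1", arr)
    store.evict("rdd_0_1")                  // even if the block is dropped...
    println(arr.iterator.toList)            // ...we still hold the values to return
  }
}
```

The point is that once the partition has been unrolled locally, returning the values no longer depends on the block surviving in the store.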

