Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r15214812
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -124,15 +124,20 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
       private def putInBlockManager[T](
           key: BlockId,
           values: Iterator[T],
    -      storageLevel: StorageLevel,
    -      updatedBlocks: ArrayBuffer[(BlockId, BlockStatus)]): Iterator[T] = {
    -
    -    if (!storageLevel.useMemory) {
    -      /* This RDD is not to be cached in memory, so we can just pass the computed values
    +      level: StorageLevel,
    +      updatedBlocks: ArrayBuffer[(BlockId, BlockStatus)],
    +      effectiveStorageLevel: Option[StorageLevel] = None): Iterator[T] = {
    +
    +    val putLevel = effectiveStorageLevel.getOrElse(level)
    +    if (!putLevel.useMemory) {
    +      /*
    +       * This RDD is not to be cached in memory, so we can just pass the computed values
            * as an iterator directly to the BlockManager, rather than first fully unrolling
            * it in memory. The latter option potentially uses much more memory and risks OOM
    --- End diff --
    
    This doc is a bit outdated now (ideally, we no longer risk OOM). I think it's a bit overly complicated anyway. I'd just make it something like:
    
    ```
    // This RDD is not being cached in memory, so pass an iterator directly to the block manager.
    ```
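
    For context, the fallback the diff introduces (`effectiveStorageLevel.getOrElse(level)`) can be sketched in isolation. This is a minimal standalone sketch, not Spark's actual `StorageLevel` or `CacheManager`; the names `resolvePutLevel` and the simplified `StorageLevel` case class are hypothetical stand-ins:

    ```scala
    // Simplified stand-in for org.apache.spark.storage.StorageLevel.
    case class StorageLevel(useMemory: Boolean)

    // When an effective level is supplied, it overrides the caller's
    // requested level; otherwise the requested level is used as-is.
    def resolvePutLevel(
        level: StorageLevel,
        effectiveStorageLevel: Option[StorageLevel] = None): StorageLevel =
      effectiveStorageLevel.getOrElse(level)

    val memoryLevel = StorageLevel(useMemory = true)
    val diskLevel = StorageLevel(useMemory = false)

    // No override: the requested level is returned unchanged.
    assert(resolvePutLevel(memoryLevel) == memoryLevel)
    // Override present: the effective level takes precedence.
    assert(resolvePutLevel(memoryLevel, Some(diskLevel)) == diskLevel)
    ```

    The `Option` default keeps existing call sites source-compatible: callers that never pass an effective level see the old behavior.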

