Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r15147427
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -140,14 +145,36 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
              throw new BlockException(key, s"Block manager failed to return cached value for $key!")
          }
        } else {
    -      /* This RDD is to be cached in memory. In this case we cannot pass the computed values
    +      /*
    +       * This RDD is to be cached in memory. In this case we cannot pass the computed values
            * to the BlockManager as an iterator and expect to read it back later. This is because
    -       * we may end up dropping a partition from memory store before getting it back, e.g.
    -       * when the entirety of the RDD does not fit in memory. */
    -      val elements = new ArrayBuffer[Any]
    -      elements ++= values
    -      updatedBlocks ++= blockManager.put(key, elements, storageLevel, tellMaster = true)
    -      elements.iterator.asInstanceOf[Iterator[T]]
    +       * we may end up dropping a partition from memory store before getting it back.
    +       *
    +       * In addition, we must be careful to not unroll the entire partition in memory at once.
    +       * Otherwise, we may cause an OOM exception if the JVM does not have enough space for this
    +       * single partition. Instead, we unroll the values cautiously, potentially aborting and
    +       * dropping the partition to disk if applicable.
    +       */
    +      blockManager.memoryStore.unrollSafely(key, values, updatedBlocks) match {
    +        case Left(arr) =>
    +          // We have successfully unrolled the entire partition, so cache it in memory
    +          updatedBlocks ++=
    +            blockManager.putArray(key, arr, level, tellMaster = true, effectiveStorageLevel)
    +          arr.iterator.asInstanceOf[Iterator[T]]
    +        case Right(it) =>
    +          // There is not enough space to cache this partition in memory
    +          logWarning(s"Not enough space to cache $key in memory! " +
    +            s"Free memory is ${blockManager.memoryStore.freeMemory} bytes.")
    +          var returnValues = it.asInstanceOf[Iterator[T]]
    +          if (putLevel.useDisk) {
    +            logWarning(s"Persisting $key to disk instead.")
    +            val diskOnlyLevel = StorageLevel(useDisk = true, useMemory = false,
    +              useOffHeap = false, deserialized = false, putLevel.replication)
    +            returnValues =
    +              putInBlockManager[T](key, returnValues, level, updatedBlocks, Some(diskOnlyLevel))
    +          }
    +          returnValues
    +      }
    --- End diff --
    
    I wonder if we could move this logic into BlockManager later. That can probably be part of another PR, but it's pretty complicated to have this logic here and then similar logic in BlockManager when it's doing a get (not to mention in its put methods).
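
    The unroll-or-spill control flow in the diff can be sketched in isolation. This is a simplified illustration, not the actual MemoryStore API: `UnrollSketch`, the element `budget`, and this signature of `unrollSafely` are hypothetical stand-ins; the real method also tracks block IDs, byte sizes, and updated block statuses.

    ```scala
    object UnrollSketch {
      // Simplified stand-in for MemoryStore.unrollSafely: accumulate values up to a
      // fixed element budget. Left(array) means the whole partition fit in memory;
      // Right(iterator) hands back everything unrolled so far plus the rest, so no
      // element is lost when the caller falls back to a disk-only storage level.
      def unrollSafely(values: Iterator[Any], budget: Int): Either[Array[Any], Iterator[Any]] = {
        val buf = scala.collection.mutable.ArrayBuffer.empty[Any]
        while (values.hasNext && buf.size < budget) buf += values.next()
        if (!values.hasNext) Left(buf.toArray)   // everything fit: safe to cache as an array
        else Right(buf.iterator ++ values)       // out of room: abort, keep the full stream
      }

      def main(args: Array[String]): Unit = {
        // Small partition: fits within the budget, so it would be cached in memory.
        unrollSafely(Iterator(1, 2, 3), budget = 10) match {
          case Left(arr) => println(s"cached in memory: ${arr.mkString(",")}")
          case Right(_)  => println("unexpected spill")
        }
        // Large partition: does not fit, so the caller would retry with a
        // disk-only StorageLevel, as the diff does when putLevel.useDisk is set.
        unrollSafely(Iterator.range(0, 100), budget = 10) match {
          case Left(_)   => println("unexpected fit")
          case Right(it) => println(s"spilling; iterator still yields ${it.size} elements")
        }
      }
    }
    ```

    The point of the Either result is that the failure branch still returns a complete iterator, which is what lets putInBlockManager recurse with a disk-only level without recomputing the partition.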

