Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10748#discussion_r49678166
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -126,67 +136,4 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
           }
         }
       }
    -
    -  /**
    -   * Cache the values of a partition, keeping track of any updates in the storage statuses of
    -   * other blocks along the way.
    -   *
    -   * The effective storage level refers to the level that actually specifies BlockManager put
    -   * behavior, not the level originally specified by the user. This is mainly for forcing a
    -   * MEMORY_AND_DISK partition to disk if there is not enough room to unroll the partition,
    -   * while preserving the original semantics of the RDD as specified by the application.
    -   */
    -  private def putInBlockManager[T](
    -      key: BlockId,
    -      values: Iterator[T],
    -      level: StorageLevel,
    -      updatedBlocks: ArrayBuffer[(BlockId, BlockStatus)],
    -      effectiveStorageLevel: Option[StorageLevel] = None): Iterator[T] = {
    -
    -    val putLevel = effectiveStorageLevel.getOrElse(level)
    -    if (!putLevel.useMemory) {
    -      /*
    -       * This RDD is not to be cached in memory, so we can just pass the computed values as an
    -       * iterator directly to the BlockManager rather than first fully unrolling it in memory.
    -       */
    -      updatedBlocks ++=
    -        blockManager.putIterator(key, values, level, tellMaster = true, effectiveStorageLevel)
    -      blockManager.get(key) match {
    -        case Some(v) => v.data.asInstanceOf[Iterator[T]]
    -        case None =>
    -          logInfo(s"Failure to store $key")
    -          throw new BlockException(key, s"Block manager failed to return cached value for $key!")
    -      }
    -    } else {
    -      /*
    -       * This RDD is to be cached in memory. In this case we cannot pass the computed values
    -       * to the BlockManager as an iterator and expect to read it back later. This is because
    -       * we may end up dropping a partition from memory store before getting it back.
    -       *
    -       * In addition, we must be careful to not unroll the entire partition in memory at once.
    -       * Otherwise, we may cause an OOM exception if the JVM does not have enough space for this
    -       * single partition. Instead, we unroll the values cautiously, potentially aborting and
    -       * dropping the partition to disk if applicable.
    -       */
    -      blockManager.memoryStore.unrollSafely(key, values, updatedBlocks) match {
    -        case Left(arr) =>
    -          // We have successfully unrolled the entire partition, so cache it in memory
    -          updatedBlocks ++=
    -            blockManager.putArray(key, arr, level, tellMaster = true, effectiveStorageLevel)
    --- End diff --
    
    The `BlockManager.putArray()` method was only called from here, hence the cleanup of those methods.
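
    For context, the `unrollSafely` contract that the deleted code relies on can be sketched as follows. This is a simplified, hypothetical model only — it uses a plain element-count budget in place of the real `MemoryStore` size estimation and memory reservation, and the function name is reused purely for illustration:

    ```scala
    import scala.collection.mutable.ArrayBuffer
    import scala.reflect.ClassTag

    // Sketch of the "unroll cautiously" pattern: consume the iterator
    // incrementally, checking a budget as we go.
    //   Left(array)     -> the partition fit; safe to cache in memory.
    //   Right(iterator) -> we gave up partway; the buffered values plus the
    //                      untouched remainder are handed back so the caller
    //                      can fall back to disk without losing data.
    // `budget` is a hypothetical element-count stand-in for real memory checks.
    def unrollSafely[T: ClassTag](
        values: Iterator[T],
        budget: Long): Either[Array[T], Iterator[T]] = {
      val buffer = new ArrayBuffer[T]
      while (values.hasNext) {
        if (buffer.length >= budget) {
          // Abort: return everything seen so far plus the rest, un-unrolled
          return Right(buffer.iterator ++ values)
        }
        buffer += values.next()
      }
      Left(buffer.toArray)
    }
    ```

    The point of the `Either` shape is that the abort path loses nothing: the caller pattern-matches on `Left` to cache the fully unrolled array, or on `Right` to stream the combined iterator to disk.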

