Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10748#discussion_r49772294
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -126,67 +136,4 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
           }
         }
       }
    -
    -  /**
    -   * Cache the values of a partition, keeping track of any updates in the storage statuses of
    -   * other blocks along the way.
    -   *
    -   * The effective storage level refers to the level that actually specifies BlockManager put
    -   * behavior, not the level originally specified by the user. This is mainly for forcing a
    -   * MEMORY_AND_DISK partition to disk if there is not enough room to unroll the partition,
    -   * while preserving the original semantics of the RDD as specified by the application.
    -   */
    -  private def putInBlockManager[T](
    -      key: BlockId,
    -      values: Iterator[T],
    -      level: StorageLevel,
    -      updatedBlocks: ArrayBuffer[(BlockId, BlockStatus)],
    -      effectiveStorageLevel: Option[StorageLevel] = None): Iterator[T] = {
    -
    -    val putLevel = effectiveStorageLevel.getOrElse(level)
    -    if (!putLevel.useMemory) {
    -      /*
    -       * This RDD is not to be cached in memory, so we can just pass the computed values as an
    -       * iterator directly to the BlockManager rather than first fully unrolling it in memory.
    -       */
    -      updatedBlocks ++=
    -        blockManager.putIterator(key, values, level, tellMaster = true, effectiveStorageLevel)
    -      blockManager.get(key) match {
    -        case Some(v) => v.data.asInstanceOf[Iterator[T]]
    -        case None =>
    -          logInfo(s"Failure to store $key")
    -          throw new BlockException(key, s"Block manager failed to return cached value for $key!")
    -      }
    -    } else {
    -      /*
    -       * This RDD is to be cached in memory. In this case we cannot pass the computed values
    -       * to the BlockManager as an iterator and expect to read it back later. This is because
    -       * we may end up dropping a partition from memory store before getting it back.
    --- End diff ---
    
    This problem can be addressed via my other patch for locking in the block manager: we can have a put() implicitly retain a lock on the block that was just stored.
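    
    As a rough sketch of that idea, assuming a hypothetical lock registry and a simplified put helper (the names BlockLockManager, SketchBlockStore, and putAndLock are illustrative only, not the actual BlockManager API): the put registers a read lock on the block before returning, so the read-back path cannot race with eviction.
    
        // Illustrative sketch only; hypothetical names, not the real BlockManager API.
        import scala.collection.mutable
    
        class BlockLockManager {
          // Per-block read-lock counts; a real implementation would also track
          // writers and block callers on conflicting requests.
          private val readLocks = mutable.Map[String, Int]().withDefaultValue(0)
    
          def acquireRead(blockId: String): Unit = synchronized { readLocks(blockId) += 1 }
          def releaseRead(blockId: String): Unit = synchronized { readLocks(blockId) -= 1 }
          def canEvict(blockId: String): Boolean = synchronized { readLocks(blockId) == 0 }
        }
    
        class SketchBlockStore(locks: BlockLockManager) {
          private val store = mutable.Map[String, Vector[Any]]()
    
          // Stores the block and returns while still holding a read lock on it,
          // so the caller can consume the values before anything can evict them.
          def putAndLock(blockId: String, values: Iterator[Any]): Iterator[Any] = synchronized {
            val materialized = values.toVector
            store(blockId) = materialized
            locks.acquireRead(blockId)  // the lock implicitly retained by the put
            materialized.iterator
          }
    
          // Eviction must respect outstanding read locks.
          def evict(blockId: String): Boolean = synchronized {
            if (locks.canEvict(blockId)) store.remove(blockId).isDefined else false
          }
        }
    
    The read-back path would then release the read lock once it has fully consumed the returned iterator, and eviction simply skips blocks that still have outstanding readers.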

