GitHub user liyezhang556520 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r21142893
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -118,21 +118,29 @@ private[spark] class CacheManager(blockManager: 
BlockManager) extends Logging {
       }
     
       /**
    -   * Cache the values of a partition, keeping track of any updates in the 
storage statuses
    -   * of other blocks along the way.
    +   * Cache the values of a partition, keeping track of any updates in the 
storage statuses of
    +   * other blocks along the way.
    +   *
    +   * The effective storage level refers to the level that actually 
specifies BlockManager put
    +   * behavior, not the level originally specified by the user. This is 
mainly for forcing a
    +   * MEMORY_AND_DISK partition to disk if there is not enough room to 
unroll the partition,
    +   * while preserving the original semantics of the RDD as specified 
by the application.
        */
       private def putInBlockManager[T](
           key: BlockId,
           values: Iterator[T],
    -      storageLevel: StorageLevel,
    -      updatedBlocks: ArrayBuffer[(BlockId, BlockStatus)]): Iterator[T] = {
    -
    -    if (!storageLevel.useMemory) {
    -      /* This RDD is not to be cached in memory, so we can just pass the 
computed values
    -       * as an iterator directly to the BlockManager, rather than first 
fully unrolling
    -       * it in memory. The latter option potentially uses much more memory 
and risks OOM
    -       * exceptions that can be avoided. */
    -      updatedBlocks ++= blockManager.put(key, values, storageLevel, 
tellMaster = true)
    +      level: StorageLevel,
    +      updatedBlocks: ArrayBuffer[(BlockId, BlockStatus)],
    +      effectiveStorageLevel: Option[StorageLevel] = None): Iterator[T] = {
    --- End diff --
    
    @andrewor14, since `effectiveStorageLevel` is only used in this patch to 
force a MEMORY_AND_DISK partition to disk, would it be better not to expose 
this parameter here or in `BlockManager.putBytes/putArray/putIterator`? If 
someone explicitly passes an `effectiveStorageLevel` by calling 
`BlockManager.putBytes` directly, for example 
[PR#3534](https://github.com/apache/spark/pull/3534), and that 
`effectiveStorageLevel` differs from the original `level` (in cache tiers or 
replication), this will lead to wrong status messages on the web UI, because 
the BlockManager is not aware of the difference.
    
    Take another case for example: if the original level is MEMORY_ONLY and 
the caller forces the block to disk via `effectiveStorageLevel`, the 
BlockManager will never know the block is on disk, and the block becomes a 
zombie. Of course, such a case is unlikely to happen.
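    To make the concern concrete, here is a minimal toy model of the 
mismatch (the `ToyLevel` and `ToyBlockManager` names are made up for 
illustration, not Spark's actual classes): the master is told only the 
user-specified level, while the data is placed per the effective level, so 
the reported status can diverge from where the block really lives.

```scala
// Toy sketch of the status mismatch; NOT Spark's real BlockManager.
case class ToyLevel(useMemory: Boolean, useDisk: Boolean)

object ToyLevel {
  val MEMORY_ONLY = ToyLevel(useMemory = true, useDisk = false)
  val DISK_ONLY   = ToyLevel(useMemory = false, useDisk = true)
}

class ToyBlockManager {
  // Status the master / web UI would see, keyed by block id.
  private val reportedLevel = scala.collection.mutable.Map[String, ToyLevel]()
  // Where the data actually lives.
  private val actualLevel = scala.collection.mutable.Map[String, ToyLevel]()

  def put(id: String, level: ToyLevel,
          effectiveLevel: Option[ToyLevel] = None): Unit = {
    actualLevel(id) = effectiveLevel.getOrElse(level)
    reportedLevel(id) = level // master only learns the original level
  }

  // A "zombie" block: reported placement disagrees with actual placement.
  def isZombie(id: String): Boolean =
    reportedLevel(id) != actualLevel(id)
}

val bm = new ToyBlockManager
// Caller forces a MEMORY_ONLY block to disk via the effective level:
bm.put("rdd_0_0", ToyLevel.MEMORY_ONLY, Some(ToyLevel.DISK_ONLY))
println(bm.isZombie("rdd_0_0")) // prints true: UI thinks it is in memory
```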

