GitHub user ScrapCodes commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2134#discussion_r16832936
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
    @@ -200,81 +248,118 @@ private[spark] class MemoryStore(blockManager: BlockManager, maxMemory: Long)
        * checking whether the memory restrictions for unrolling blocks are still satisfied,
        * stopping immediately if not. This check is a safeguard against the scenario in which
        * there is not enough free memory to accommodate the entirety of a single block.
    +   *
    +   * When there is not enough memory for unrolling blocks, old blocks will be dropped from
    +   * memory. The dropping is done in parallel to fully utilize the disk throughput when
    +   * there are multiple disks. Before dropping, each thread marks the old blocks that
    +   * can be dropped.
        *
        * This method returns either an array with the contents of the entire block or an iterator
        * containing the values of the block (if the array would have exceeded available memory).
        */
    +
       def unrollSafely(
    -      blockId: BlockId,
    -      values: Iterator[Any],
    -      droppedBlocks: ArrayBuffer[(BlockId, BlockStatus)])
    -    : Either[Array[Any], Iterator[Any]] = {
    +    blockId: BlockId,
    --- End diff --
    
    Incorrect indentation: the parameter list dropped from the original six-space indent to four spaces; please restore the original indent.
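    
    For anyone following along, the mark-then-drop-in-parallel flow the new doc
    comment describes could look roughly like the self-contained sketch below.
    All names here (`ParallelDropSketch`, `markBlocksToDrop`,
    `dropMarkedInParallel`, and the `BlockId`/`BlockInfo` stand-ins) are
    hypothetical illustrations, not the PR's actual MemoryStore code:
    
    import java.util.concurrent.ConcurrentHashMap
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration._
    
    object ParallelDropSketch {
      // Hypothetical stand-ins; not Spark's real BlockId/BlockInfo types.
      case class BlockId(name: String)
      case class BlockInfo(sizeBytes: Long)
    
      // Blocks currently resident in memory.
      private val inMemory = new ConcurrentHashMap[BlockId, BlockInfo]()
    
      // Phase 1: mark old blocks until enough space is reserved. The atomic
      // remove(key, value) ensures each block is claimed by exactly one
      // thread, so concurrent unrollers never try to drop the same block.
      def markBlocksToDrop(spaceNeeded: Long): Seq[BlockId] = {
        val marked = scala.collection.mutable.ArrayBuffer.empty[BlockId]
        var reserved = 0L
        val it = inMemory.entrySet().iterator()
        while (it.hasNext && reserved < spaceNeeded) {
          val e = it.next()
          if (inMemory.remove(e.getKey, e.getValue)) {
            marked += e.getKey
            reserved += e.getValue.sizeBytes
          }
        }
        marked.toSeq
      }
    
      // Phase 2: write the marked blocks out concurrently, so that when the
      // blocks land on different disks their throughput is used in parallel.
      def dropMarkedInParallel(blocks: Seq[BlockId])
                              (implicit ec: ExecutionContext): Unit = {
        val writes = blocks.map(id => Future(writeToDisk(id)))
        Await.result(Future.sequence(writes), 10.minutes)
      }
    
      // Placeholder for the real disk-store write.
      private def writeToDisk(id: BlockId): Unit =
        println(s"dropped $id to disk")
    }
    
    The two-phase split mirrors the doc comment: marking cheaply serializes
    ownership of each victim block, while the slow disk writes run
    concurrently and can saturate multiple disks.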


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
