Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r15271976
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
    @@ -141,6 +188,88 @@ private class MemoryStore(blockManager: BlockManager, maxMemory: Long)
       }
     
       /**
    +   * Unroll the given block in memory safely.
    +   *
    +   * The safety of this operation refers to avoiding potential OOM exceptions caused by
    +   * unrolling the entirety of the block in memory at once. This is achieved by periodically
    +   * checking whether the memory restrictions for unrolling blocks are still satisfied,
    +   * stopping immediately if not. This check is a safeguard against the scenario in which
    +   * there is not enough free memory to accommodate the entirety of a single block.
    +   *
    +   * This method returns either an array with the contents of the entire block or an iterator
    +   * containing the values of the block (if the array would have exceeded available memory).
    +   */
    +  def unrollSafely(
    +      blockId: BlockId,
    +      values: Iterator[Any],
    +      droppedBlocks: ArrayBuffer[(BlockId, BlockStatus)])
    +    : Either[Array[Any], Iterator[Any]] = {
    +
    +    var count = 0                     // The number of elements unrolled so far
    +    var atMemoryLimit = false         // Whether we are at the memory limit for unrolling blocks
    +    var previousSize = 0L             // Previous estimate of the size of our buffer
    +    val memoryRequestPeriod = 1000    // How frequently we request more memory for our buffer
    +    val memoryRequestThreshold = 100  // Before this is exceeded, request memory every round
    --- End diff --
    
I think the use case here is the following: the user is doing something that generates a few very large objects (e.g. a `groupBy` where almost all the data is in a few keys). When this happens, we may OOM within the first few elements of the RDD. So the proposal was to check aggressively at the beginning; otherwise, we will OOM on workloads like that. In practice, I've seen that happen a lot, especially with `groupBy`.
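
For reference, here is a minimal, self-contained sketch of that cadence (not the actual `MemoryStore` code): check on every element up to `memoryRequestThreshold`, then only once every `memoryRequestPeriod` elements. The `maxBytes` cap and the `sizeOf` estimator are hypothetical stand-ins for the unroll memory limit and what `SizeEstimator` does in Spark.

```scala
import scala.collection.mutable.ArrayBuffer

object UnrollCadenceSketch {
  val memoryRequestThreshold = 100 // check every element for the first 100 elements
  val memoryRequestPeriod = 1000   // afterwards, check only every 1000 elements

  def unrollSafely(
      values: Iterator[Any],
      maxBytes: Long,              // hypothetical unroll memory cap
      sizeOf: Any => Long          // hypothetical per-element size estimator
    ): Either[Array[Any], Iterator[Any]] = {
    val buffer = new ArrayBuffer[Any]
    var count = 0
    var atMemoryLimit = false
    while (values.hasNext && !atMemoryLimit) {
      buffer += values.next()
      count += 1
      // Aggressive checks early: a few huge objects (e.g. groupBy values)
      // can blow past the cap within the first handful of elements, so we
      // check every round up to memoryRequestThreshold, then periodically.
      if (count <= memoryRequestThreshold || count % memoryRequestPeriod == 0) {
        val estimate = buffer.iterator.map(sizeOf).sum // crude full re-estimate
        if (estimate > maxBytes) atMemoryLimit = true
      }
    }
    if (atMemoryLimit) Right(buffer.iterator ++ values) // what we buffered, plus the rest
    else Left(buffer.toArray)
  }

  def main(args: Array[String]): Unit = {
    // Pretend every element costs 16 bytes; cap the unroll at 1 KB.
    unrollSafely((1 to 10000).iterator, maxBytes = 1024L, sizeOf = _ => 16L) match {
      case Left(arr) => println(s"unrolled fully: ${arr.length} elements")
      case Right(_)  => println("hit the memory limit; falling back to an iterator")
    }
  }
}
```

The early per-element checks are cheap because the buffer is still small, while the later periodic checks keep size estimation from dominating the unroll loop.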

