[ 
https://issues.apache.org/jira/browse/SPARK-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230579#comment-14230579
 ] 

Ilya Ganelin commented on SPARK-4189:
-------------------------------------

Looking at the code, I see:

      // Just copy the buffer if it's sufficiently small, as memory mapping has a
      // high overhead.
      if (length < conf.memoryMapBytes())

That resolves to:

  /**
   * Minimum size of a block that we should start using memory map rather than
   * reading in through normal IO operations. This prevents Spark from memory
   * mapping very small blocks. In general, memory mapping has high overhead for
   * blocks close to or below the page size of the OS.
   */
  public int memoryMapBytes() {
    return conf.getInt("spark.storage.memoryMapThreshold", 2 * 1024 * 1024);
  }

Are you proposing that this class have its own configuration parameter rather 
than reusing the existing one?
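If a dedicated parameter is what's intended, one possible shape is a new getter that falls back to the existing storage setting. This is only a sketch; the key name `spark.shuffle.memoryMapThreshold`, the method name `shuffleMemoryMapBytes`, and the `Conf` stand-in below are all hypothetical, not actual Spark API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Spark's config object, for illustration only.
class Conf {
    private final Map<String, String> settings = new HashMap<>();

    void set(String key, String value) { settings.put(key, value); }

    int getInt(String key, int defaultValue) {
        String v = settings.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }
}

public class MemoryMapThreshold {
    // Hypothetical: a dedicated key for FileSegmentManagedBuffer that falls
    // back to the existing spark.storage.memoryMapThreshold, then to the
    // current 2 MB default.
    static int shuffleMemoryMapBytes(Conf conf) {
        return conf.getInt("spark.shuffle.memoryMapThreshold",
                conf.getInt("spark.storage.memoryMapThreshold", 2 * 1024 * 1024));
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        // Nothing set: uses the 2 MB default.
        System.out.println(shuffleMemoryMapBytes(conf));
        // Only the storage-level key set: the new getter inherits it.
        conf.set("spark.storage.memoryMapThreshold", "8388608");
        System.out.println(shuffleMemoryMapBytes(conf));
        // Dedicated key set: it takes precedence.
        conf.set("spark.shuffle.memoryMapThreshold", "1048576");
        System.out.println(shuffleMemoryMapBytes(conf));
    }
}
```

The fallback chain keeps existing deployments unchanged while letting users tune the shuffle path independently.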



> FileSegmentManagedBuffer should have a configurable memory map threshold
> ------------------------------------------------------------------------
>
>                 Key: SPARK-4189
>                 URL: https://issues.apache.org/jira/browse/SPARK-4189
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Aaron Davidson
>
> One size does not fit all; it would be useful if there were a configuration to 
> change the threshold at which we memory map shuffle files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
