GitHub user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10240#discussion_r47282031
  
    --- Diff: core/src/main/scala/org/apache/spark/memory/ExecutionMemoryPool.scala ---
    @@ -70,11 +70,28 @@ private[memory] class ExecutionMemoryPool(
        * active tasks) before it is forced to spill. This can happen if the number of tasks increases
        * but an older task had a lot of memory already.
        *
    +   * @param numBytes number of bytes to acquire
    +   * @param taskAttemptId the task attempt acquiring memory
    +   * @param maybeGrowPool a callback that potentially grows the size of this pool. It takes in
    +   *                      one parameter (Long) that represents the desired amount of memory by
    +   *                      which this pool should be expanded.
    +   * @param computeMaxPoolSize a callback that returns the maximum allowable size of this pool
    --- End diff ---
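
    For context, a minimal sketch of the signature these new @param tags describe is below;
    the parameter names follow the scaladoc, but the default values and body are illustrative
    assumptions, not necessarily the exact code in this PR.

        private[memory] def acquireMemory(
            numBytes: Long,
            taskAttemptId: Long,
            // callback that may grow this pool, e.g. by reclaiming space from storage
            maybeGrowPool: Long => Unit = (_: Long) => (),
            // callback that returns the maximum size this pool may currently reach
            computeMaxPoolSize: () => Long = () => poolSize): Long = {
          // grant this task between 1 / (2 * numActiveTasks) and 1 / numActiveTasks of
          // the pool, blocking (and eventually forcing a spill) if that is not possible
          ???
        }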
    
    No, because if the storage memory in use is below a certain threshold (by default 0.5 of
    max memory), it cannot be evicted. In that case the max pool size depends on how much
    unevictable storage memory there is.
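
    Concretely, a rough sketch of how such a computeMaxPoolSize callback could account for
    unevictable storage is below; names like storageRegionSize and storageMemoryUsed are
    illustrative assumptions, not necessarily the exact code in this PR.

        def computeMaxExecutionPoolSize(): Long = {
          // Storage usage up to the protected region (spark.memory.storageFraction, 0.5 of
          // max memory by default) cannot be evicted, so it caps how far execution can grow;
          // anything cached beyond that region can be evicted if execution needs the space.
          maxMemory - math.min(storageMemoryUsed, storageRegionSize)
        }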

