GitHub user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9084#discussion_r41827532
  
    --- Diff: core/src/main/scala/org/apache/spark/shuffle/ShuffleMemoryManager.scala ---
    @@ -73,15 +74,18 @@ class ShuffleMemoryManager protected (
         // of active tasks, to let other tasks ramp down their memory in calls to tryToAcquire
         if (!taskMemory.contains(taskAttemptId)) {
           taskMemory(taskAttemptId) = 0L
    -      notifyAll()  // Will later cause waiting tasks to wake up and check numTasks again
    +      // This will later cause waiting tasks to wake up and check numTasks again
    +      memoryManager.notifyAll()
         }
     
         // Keep looping until we're either sure that we don't want to grant this request (because this
         // task would have more than 1 / numActiveTasks of the memory) or we have enough free
         // memory to give it (we always let each task get at least 1 / (2 * numActiveTasks)).
    +    // TODO: simplify this to limit each task to its own slot
    --- End diff --
    
    No, this is largely orthogonal to the memory management work. Limiting a
    task to its slot affects both legacy and unified mode. The motivation for
    doing this is to simplify the logic here, so we shouldn't bend over
    backwards trying to preserve the legacy behavior exactly.
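
    For context, the invariant that loop maintains is that each active task
    ends up holding between 1 / (2 * numActiveTasks) and 1 / numActiveTasks of
    the pool. Below is a minimal, self-contained Scala sketch of that loop; the
    names (SimplePool, the fixed maxMemory, the explicit taskAttemptId
    parameter) are illustrative assumptions, not Spark's actual API:

        // Sketch of the fairness loop, assuming a fixed pool size; the object
        // and field names here are illustrative, not Spark's actual API.
        object SimplePool {
          private val maxMemory = 1024L
          private val taskMemory = scala.collection.mutable.HashMap.empty[Long, Long]

          def tryToAcquire(taskAttemptId: Long, numBytes: Long): Long = synchronized {
            // First call from this task: register it and wake waiters so they
            // re-check numActiveTasks (their fair share just shrank).
            if (!taskMemory.contains(taskAttemptId)) {
              taskMemory(taskAttemptId) = 0L
              notifyAll()
            }
            while (true) {
              val numActiveTasks = taskMemory.size
              val curMem = taskMemory(taskAttemptId)
              val freeMemory = maxMemory - taskMemory.values.sum
              // Never let one task exceed 1 / numActiveTasks of the pool.
              val maxToGrant =
                math.min(numBytes, math.max(0L, maxMemory / numActiveTasks - curMem))
              if (curMem < maxMemory / (2 * numActiveTasks)) {
                // Below the 1 / (2 * numActiveTasks) floor: only return once we
                // can reach the floor; otherwise block until memory frees up.
                if (freeMemory >= math.min(maxToGrant, maxMemory / (2 * numActiveTasks) - curMem)) {
                  val toGrant = math.min(maxToGrant, freeMemory)
                  taskMemory(taskAttemptId) += toGrant
                  return toGrant
                } else {
                  wait() // woken by the notifyAll() above or by a release
                }
              } else {
                // Already at or above the floor: grant whatever fits, possibly 0.
                val toGrant = math.min(maxToGrant, freeMemory)
                taskMemory(taskAttemptId) += toGrant
                return toGrant
              }
            }
            0L // unreachable; satisfies the return type
          }
        }

    Under the per-slot scheme the TODO alludes to, the cap would presumably
    collapse to a fixed maxMemory / numSlots per task with no wait/notify
    dance, which is why it simplifies this logic regardless of legacy vs.
    unified mode.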

