Ngone51 commented on a change in pull request #34186:
URL: https://github.com/apache/spark/pull/34186#discussion_r730878231



##########
File path: core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java
##########
@@ -135,109 +135,117 @@ public TaskMemoryManager(MemoryManager memoryManager, long taskAttemptId) {
    *
    * @return number of bytes successfully granted (<= N).
    */
-  public long acquireExecutionMemory(long required, MemoryConsumer consumer) {
+  public long acquireExecutionMemory(long required, MemoryConsumer requestingConsumer) {
     assert(required >= 0);
-    assert(consumer != null);
-    MemoryMode mode = consumer.getMode();
+    assert(requestingConsumer != null);
+    MemoryMode mode = requestingConsumer.getMode();
     // If we are allocating Tungsten pages off-heap and receive a request to allocate on-heap
     // memory here, then it may not make sense to spill since that would only end up freeing
     // off-heap memory. This is subject to change, though, so it may be risky to make this
     // optimization now in case we forget to undo it late when making changes.
     synchronized (this) {
+      consumers.add(requestingConsumer);
       long got = memoryManager.acquireExecutionMemory(required, taskAttemptId, mode);

Review comment:
       Initially, I noticed a behavior difference if we keep adding the requesting consumer at the end: we would not self-spill the requesting consumer the first time it calls `acquireExecutionMemory`. However, I later realized that self-spilling the requesting consumer on its first call is actually a no-op. Therefore, I agree with you @mridulm. Keeping the consumer addition at the end should still be fine.
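
       To illustrate why the first-time self-spill is a no-op, here is a simplified sketch (hypothetical stand-in classes, not Spark's actual `MemoryConsumer` API): a consumer that has only just been added to `consumers` holds no memory yet, so asking it to spill releases zero bytes.

```java
// Simplified sketch with a hypothetical stand-in for MemoryConsumer.
// A consumer that has not been granted any memory has used == 0, so a
// self-spill on its first acquireExecutionMemory call frees nothing.
public class SelfSpillNoOpSketch {
    static class Consumer {
        private long used = 0;

        // Releases everything the consumer currently holds and
        // returns the number of bytes freed.
        long spill() {
            long released = used;
            used = 0;
            return released;
        }

        void grant(long bytes) { used += bytes; }
    }

    public static void main(String[] args) {
        Consumer fresh = new Consumer();
        // First call: the consumer was just added and holds no memory,
        // so self-spilling it releases 0 bytes (a no-op).
        System.out.println("first-call self-spill released: " + fresh.spill());

        // After a grant, a later self-spill actually frees memory.
        fresh.grant(1024);
        System.out.println("later self-spill released: " + fresh.spill());
    }
}
```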
   
    




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
