yjshen commented on a change in pull request #1921:
URL: https://github.com/apache/arrow-datafusion/pull/1921#discussion_r819384921



##########
File path: datafusion/src/execution/memory_manager.rs
##########
@@ -340,7 +341,13 @@ impl MemoryManager {
             } else if current < min_per_rqt {
                 // if we cannot acquire at least 1/2n memory, just wait for
                 // others to spill instead of spilling self frequently with
                 // limited total mem
-                self.cv.wait(&mut rqt_current_used);
+                let timeout = self
+                    .cv
+                    .wait_for(&mut rqt_current_used, Duration::from_secs(5));

Review comment:
       It seems Spark still uses an [infinite wait](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/memory/ExecutionMemoryPool.scala#L140) in the 1/2n situation. Do you think we will have some new cases to deal with, or limitations in the current design? I'm not implying we should strictly follow Spark's approach, since the model is different (for example, we forbid triggering others to spill, and we try to share memory evenly among all consumers), but since you are a Spark committer, I might be asking the right person ☺️
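
For context on the trade-off under discussion, here is a minimal, hypothetical sketch (using `std::sync::Condvar` rather than DataFusion's actual `MemoryManager` or its condvar type) of how a bounded `wait_timeout` differs from an infinite `wait`: the waiter wakes after the timeout even if no other consumer ever spills and notifies, and can then decide to spill itself. The function and variable names are illustrative only.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Hypothetical illustration, not DataFusion's real MemoryManager:
// block on a condvar with a timeout instead of waiting forever, so the
// waiter can re-check availability even if no notification arrives.
fn wait_for_memory(pair: &Arc<(Mutex<usize>, Condvar)>, needed: usize) -> bool {
    let (lock, cv) = &**pair;
    let mut available = lock.lock().unwrap();
    while *available < needed {
        // Unlike an infinite `cv.wait`, `wait_timeout` returns after
        // 100 ms even when nobody calls `notify`, avoiding a hang if
        // the notifying side never spills.
        let (guard, result) = cv
            .wait_timeout(available, Duration::from_millis(100))
            .unwrap();
        available = guard;
        if result.timed_out() {
            return false; // caller may choose to spill itself instead
        }
    }
    *available -= needed;
    true
}

fn main() {
    let pair = Arc::new((Mutex::new(0usize), Condvar::new()));

    // A "spilling" thread frees memory and notifies the waiter.
    let spiller = Arc::clone(&pair);
    let t = thread::spawn(move || {
        thread::sleep(Duration::from_millis(20));
        *spiller.0.lock().unwrap() = 64;
        spiller.1.notify_all();
    });

    let acquired = wait_for_memory(&pair, 32);
    t.join().unwrap();
    println!("acquired={}", acquired);
}
```

With an infinite wait, the same waiter would sleep forever if the notification were lost or the other consumer never spilled; the timed variant trades that risk for a periodic re-check.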



