crepererum commented on code in PR #10026:
URL: https://github.com/apache/arrow-datafusion/pull/10026#discussion_r1562272838


##########
datafusion/physical-plan/src/repartition/distributor_channels.rs:
##########
@@ -188,33 +205,48 @@ impl<'a, T> Future for SendFuture<'a, T> {
         let this = &mut *self;
         assert!(this.element.is_some(), "polled ready future");
 
-        let mut guard_channel = this.channel.lock();
-
-        // receiver end still alive?
-        if !guard_channel.recv_alive {
-            return Poll::Ready(Err(SendError(
-                this.element.take().expect("just checked"),
-            )));
-        }
-
-        let mut guard_gate = this.gate.lock();
+        // lock scope
+        let to_wake = {
+            let mut guard_channel_state = this.channel.state.lock();
+
+            let Some(data) = guard_channel_state.data.as_mut() else {
+                // receiver end dead
+                return Poll::Ready(Err(SendError(
+                    this.element.take().expect("just checked"),
+                )));
+            };
+
+            // does ANY receiver need data?
+            // if so, allow sender to create another
+            if this.gate.empty_channels.load(Ordering::SeqCst) == 0 {

Review Comment:
   > However, if this is the case wouldn't it be clearer to not use an atomic usize here and just hang the count on channel state?
   
   Reading this counter should be cheap (i.e. lockless). If you place it behind a mutex, all threads will contend on a single hot-spot critical section.
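   
   As a rough illustration of the trade-off (a minimal sketch, not the code in this PR; only the idea of an `empty_channels` counter mirrors the diff above, everything else is hypothetical):
   
   ```rust
   use std::sync::atomic::{AtomicUsize, Ordering};
   use std::sync::{Arc, Mutex};
   
   /// Counter guarded by a mutex: every reader and writer must take the lock.
   struct MutexCounter {
       empty_channels: Mutex<usize>,
   }
   
   /// Lock-free counter: readers issue a plain atomic load, no critical section.
   struct AtomicCounter {
       empty_channels: AtomicUsize,
   }
   
   fn main() {
       let locked = Arc::new(MutexCounter {
           empty_channels: Mutex::new(0),
       });
       let lockless = Arc::new(AtomicCounter {
           empty_channels: AtomicUsize::new(0),
       });
   
       // Hot path with the mutex: all senders serialize on this lock,
       // even though they only want to *read* the value.
       let any_receiver_needs_data = *locked.empty_channels.lock().unwrap() > 0;
   
       // Hot path with the atomic: a plain load, no lock to contend on.
       let any_receiver_needs_data_lockless =
           lockless.empty_channels.load(Ordering::SeqCst) > 0;
   
       println!("{any_receiver_needs_data} {any_receiver_needs_data_lockless}");
   }
   ```
   
   With many sender tasks polling their `SendFuture`s concurrently, the mutex variant turns every such check into a serialization point, whereas the atomic load stays cheap regardless of the number of threads.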


