wsry commented on a change in pull request #11877:
URL: https://github.com/apache/flink/pull/11877#discussion_r666723543



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java
##########
@@ -312,6 +323,16 @@ BufferAndBacklog pollBuffer() {
                     decreaseBuffersInBacklogUnsafe(bufferConsumer.isBuffer());
                 }
 
+                // if we have an empty finished buffer and the exclusive credit is 0, we just return
+                // the empty buffer so that the downstream task can release the allocated credit for
+                // this empty buffer, this happens in two main scenarios currently:
+                // 1. all data of a buffer builder has been read and after that the buffer builder
+                // is finished
+                // 2. in approximate recovery mode, a partial record takes a whole buffer builder
+                if (buffersPerChannel == 0 && bufferConsumer.isFinished()) {
+                    break;
+                }
+
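
For readers outside this thread, here is a minimal, self-contained sketch of the receiver-side bookkeeping the code comment above refers to. All names (CreditTrackingReceiver, onBuffer, pollCreditsToAnnounce) are hypothetical and are not Flink's actual classes; the sketch only illustrates that the credit backing a shipped buffer is freed once the receiver handles and recycles that buffer, which is why an empty finished buffer must still be sent when the exclusive credit is 0.

```java
// Hypothetical sketch, not Flink's network stack: credit-based flow control from
// the receiver's point of view. Each shipped buffer consumes one announced credit;
// the credit only becomes announceable again after the buffer is handled here,
// even if it carries no data.
import java.util.concurrent.atomic.AtomicInteger;

final class CreditTrackingReceiver {

    /** Credits freed locally but not yet announced back to the sender. */
    private final AtomicInteger unannouncedCredits = new AtomicInteger();

    /** Called for every buffer the sender ships, including empty finished ones. */
    void onBuffer(byte[] payload) {
        if (payload.length > 0) {
            deserialize(payload);
        }
        // Whether or not the buffer carried data, handling it releases the memory
        // that backed the credit, so the credit can be announced again. If the
        // sender silently dropped the empty finished buffer, this credit would
        // stay reserved forever.
        unannouncedCredits.incrementAndGet();
    }

    /** Drained periodically and sent back to the sender as new credit. */
    int pollCreditsToAnnounce() {
        return unannouncedCredits.getAndSet(0);
    }

    private void deserialize(byte[] payload) {
        // record deserialization omitted
    }
}
```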

Review comment:
       Currently, I can only come up with the following way, which relies on the downstream resetting the available credit of the upstream. It at least requires adding a special network message, and propagating that message incurs extra overhead. If you think this is really important, I will spend some time rethinking it and see whether I can find a better way to solve it.
   
> Maybe one way is to not send any data out after sending a buffer with 0 backlog at the sender side; then the receivers clear all floating credits and send a reset message to the senders, and the senders reset all available credits. This process is similar to channel blocking and resumption.
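
The handshake proposed in the quote could look roughly like the sketch below. Everything here is hypothetical (the CreditReset message, the pause flag, and the method names do not exist in Flink); it is only meant to make the extra control message and the pause/resume round trip concrete.

```java
// Hypothetical sketch of the proposed reset handshake (illustrative names only).
// Sender: after shipping a buffer with backlog 0, pause until the receiver
// confirms the reset. Receiver: clear floating credits and send a CreditReset
// message; on receiving it, the sender resets its available credits and resumes.
final class CreditResetProtocolSketch {

    /** Made-up control message sent from receiver to sender. */
    static final class CreditReset {
        final int channelId;

        CreditReset(int channelId) {
            this.channelId = channelId;
        }
    }

    // --- sender side -------------------------------------------------------
    private boolean awaitingReset;
    private int availableCredits;

    void onBufferSent(int backlogAfterSend) {
        if (backlogAfterSend == 0) {
            // Stop sending further data, similar to channel blocking.
            awaitingReset = true;
        }
    }

    boolean canSend() {
        return !awaitingReset && availableCredits > 0;
    }

    void onCreditReset(CreditReset reset) {
        // Discard credits announced before the reset and resume sending,
        // similar to channel resumption.
        availableCredits = 0;
        awaitingReset = false;
    }

    // --- receiver side -----------------------------------------------------
    private int floatingCredits;

    CreditReset onZeroBacklogBuffer(int channelId) {
        floatingCredits = 0;               // return floating buffers to the pool
        return new CreditReset(channelId); // ask the sender to reset its credits
    }
}
```

Under these assumptions, the cost is one extra control message per reset plus the latency of waiting for it before the sender can resume, which is presumably the overhead referred to above.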



