[
https://issues.apache.org/jira/browse/FLINK-10331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618757#comment-16618757
]
ASF GitHub Bot commented on FLINK-10331:
----------------------------------------
NicoK commented on a change in pull request #6692: [FLINK-10331][network] reduce unnecessary flushing
URL: https://github.com/apache/flink/pull/6692#discussion_r218357947
##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java
##########
@@ -279,6 +281,21 @@ public int unsynchronizedGetNumberOfQueuedBuffers() {
 		return Math.max(buffers.size(), 0);
 	}
 
+	@Override
+	public void flush() {
+		synchronized (buffers) {
+			if (buffers.isEmpty()) {
+				return;
+			}
+			if (!flushRequested) {
+				flushRequested = true; // set this before the notification!
+				if (buffers.size() == 1) {
Review comment:
sure - I think I explained the flushing behaviour in the class's JavaDoc:
> Whenever {@link #add(BufferConsumer)} adds a finished {@link BufferConsumer}
> or a second {@link BufferConsumer} (in which case we will assume the first
> one finished), we will {@link PipelinedSubpartitionView#notifyDataAvailable()
> notify} a read view created via
> {@link #createReadView(BufferAvailabilityListener)} of new data availability.
> Except by calling {@link #flush()} explicitly, we only notify when the first
> finished buffer turns up; the reader then has to drain the buffers via
> {@link #pollBuffer()} until its return value shows no more buffers being
> available.

But it doesn't hurt to have something small / more explicit here as well.
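The quoted hunk above is cut off at the review anchor. Purely as an
illustration of the guard being discussed, here is a minimal sketch of how
the method could continue, assuming a notifyDataAvailable() helper that wakes
the read view (anything beyond the quoted diff is an assumption, not the
actual patch):

	@Override
	public void flush() {
		synchronized (buffers) {
			if (buffers.isEmpty()) {
				return;
			}
			if (!flushRequested) {
				flushRequested = true; // set this before the notification!
				// with two or more queued buffers, the reader was already
				// notified when the second one was added, so only a single
				// queued buffer needs an explicit wake-up here
				if (buffers.size() == 1) {
					notifyDataAvailable(); // assumed helper notifying the view
				}
			}
		}
	}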
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Fix unnecessary flush requests to the network stack
> ---------------------------------------------------
>
> Key: FLINK-10331
> URL: https://issues.apache.org/jira/browse/FLINK-10331
> Project: Flink
> Issue Type: Sub-task
> Components: Network
> Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.7.0
> Reporter: Nico Kruber
> Assignee: Nico Kruber
> Priority: Major
> Labels: pull-request-available
>
> With the re-design of the record writer's interaction with the
> result (sub)partitions, flush requests can currently pile up in these
> scenarios:
> - a previous flush request has not been completely handled yet and/or is
>   still enqueued, or
> - the network stack is still polling from this subpartition and does not
>   need a new notification.
> These pile-ups lead to an increased number of notifications in low-latency
> settings (low output flusher intervals), which can be avoided.
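To make the second scenario concrete, here is a hedged consumer-side sketch
(the view, BufferAndBacklog, and consume() names are illustrative, not the
actual network stack code): while the reader is still draining, an additional
flush notification would be redundant.

	// hypothetical drain loop on the consuming side; as long as it is
	// running, the subpartition does not need another notification
	BufferAndBacklog next;
	while ((next = view.pollBuffer()) != null) {
		consume(next.buffer()); // hypothetical processing of the buffer
	}
	// only once the queue is drained does a new notifyDataAvailable()
	// matter to this reader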
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)