[ https://issues.apache.org/jira/browse/FLINK-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068432#comment-17068432 ]
Piotr Nowojski commented on FLINK-16645:
----------------------------------------

Yes, it makes sense. I think the crucial point is how to avoid unnecessary checks and unnecessary future compositions, and, generally speaking, how to implement this check efficiently without affecting performance. Setting this {{availabilityHelper}} to unavailable is probably easy (once some sub-partition exceeds the max backlog), however it will be a bit tricky to decide when to mark it available again. For example, if there is a {{flatMap}} operator, arbitrarily many records and buffers can be enqueued before anyone is able to check and respect the {{getAvailableFuture()}} status, which means many sub-partitions might have exceeded the max backlog. But that can probably be dealt with using a counter of sub-partitions that have exceeded the max backlog.

> Limit the maximum backlogs in subpartitions for data skew case
> --------------------------------------------------------------
>
>                 Key: FLINK-16645
>                 URL: https://issues.apache.org/jira/browse/FLINK-16645
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / Network
>            Reporter: Zhijiang
>            Assignee: Jiayi Liao
>            Priority: Major
>             Fix For: 1.11.0
>
>
> In the case of data skew, most of the buffers in a partition's LocalBufferPool are probably requested away and accumulated in certain subpartitions, which increases the in-flight data and slows down barrier alignment.
> We can add a proper config option to control how many backlogs are allowed for one subpartition. If a subpartition reaches this threshold, it makes the buffer pool unavailable, which blocks task processing. Then we can reduce the in-flight data, speeding up the checkpoint process a bit without impacting performance.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
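The counter-based idea from the comment could be sketched roughly as below. This is a hypothetical illustration only, not Flink's actual implementation; the class {{MaxBacklogTracker}} and its method names are invented, and only {{getAvailableFuture()}} mirrors a name mentioned in the discussion. The sketch keeps the availability future incomplete while at least one subpartition is over the threshold, and completes it again only when the count of such subpartitions drops back to zero.

```java
import java.util.concurrent.CompletableFuture;

/**
 * Hypothetical sketch of the counter approach: availability flips to
 * unavailable when the first subpartition exceeds the max backlog, and
 * becomes available again only when the last overloaded subpartition
 * recovers. Names are illustrative, not Flink APIs.
 */
class MaxBacklogTracker {
    // number of subpartitions currently over the max-backlog threshold
    private int overloadedSubpartitions;
    // completed == available; a fresh incomplete future == unavailable
    private CompletableFuture<Void> availableFuture =
            CompletableFuture.completedFuture(null);

    /** Called when a subpartition's backlog grows past the threshold. */
    synchronized void onBacklogExceeded() {
        if (overloadedSubpartitions++ == 0) {
            // first overloaded subpartition: mark unavailable
            availableFuture = new CompletableFuture<>();
        }
    }

    /** Called when a subpartition's backlog drops back under the threshold. */
    synchronized void onBacklogRecovered() {
        if (--overloadedSubpartitions == 0) {
            // last overloaded subpartition recovered: mark available again
            availableFuture.complete(null);
        }
    }

    /** Cheap check callers can poll or compose on, like getAvailableFuture(). */
    synchronized CompletableFuture<Void> getAvailableFuture() {
        return availableFuture;
    }

    synchronized boolean isAvailable() {
        return availableFuture.isDone();
    }
}
```

Because multiple subpartitions can exceed the threshold before anyone checks the future (the {{flatMap}} scenario above), the counter rather than a single boolean ensures availability is only restored once every overloaded subpartition has drained.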