wsry opened a new pull request, #20834:
URL: https://github.com/apache/flink/pull/20834

   ## What is the purpose of the change
   
   After [FLINK-28663](https://issues.apache.org/jira/browse/FLINK-28663), one intermediate dataset can be consumed by multiple consumers, and in particular a single vertex can consume the same intermediate dataset multiple times. However, in fine-grained resource mode the required network buffer size is currently computed with the intermediate dataset as the key when recording the network buffer size per input gate. As a result, fewer network buffers than needed may be allocated when two input gates of the same vertex consume the same intermediate dataset. This patch fixes the issue.
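   
   To illustrate the accounting problem, here is a minimal, hypothetical Java sketch (the class and method names are illustrative, not Flink's actual internals): recording per-gate buffer sizes in a map keyed by dataset ID silently drops the duplicate entry when two gates consume the same dataset, while accumulating with `Map.merge` counts every consuming gate.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   /**
    * Hypothetical sketch of the buffer-accounting issue; not Flink's actual code.
    * Each entry in consumedDatasetIds represents one input gate of a vertex and
    * names the intermediate dataset that gate consumes.
    */
   class NetworkMemorySketch {
   
       // Buggy variant: keyed by dataset ID, so when two gates consume the same
       // dataset the second put() overwrites the first and one gate is lost.
       static int totalBuffersKeyedByDataset(String[] consumedDatasetIds, int buffersPerGate) {
           Map<String, Integer> buffersPerDataset = new HashMap<>();
           for (String datasetId : consumedDatasetIds) {
               buffersPerDataset.put(datasetId, buffersPerGate); // overwrites on duplicate
           }
           return buffersPerDataset.values().stream().mapToInt(Integer::intValue).sum();
       }
   
       // Fixed variant: accumulate per dataset, so every consuming gate is counted.
       static int totalBuffersAccumulated(String[] consumedDatasetIds, int buffersPerGate) {
           Map<String, Integer> buffersPerDataset = new HashMap<>();
           for (String datasetId : consumedDatasetIds) {
               buffersPerDataset.merge(datasetId, buffersPerGate, Integer::sum);
           }
           return buffersPerDataset.values().stream().mapToInt(Integer::intValue).sum();
       }
   
       public static void main(String[] args) {
           // One vertex consuming dataset "A" through two input gates, 8 buffers each.
           String[] gates = {"A", "A"};
           System.out.println(totalBuffersKeyedByDataset(gates, 8)); // 8  -> under-allocated
           System.out.println(totalBuffersAccumulated(gates, 8));    // 16 -> what is needed
       }
   }
   ```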
   
   ## Brief change log
   
     - Fix the network memory size calculation issue in fine-grained resource 
mode
     - Add test case to cover the scenario
   
   ## Verifying this change
   
   This change adds a test case covering the scenario in which a single vertex consumes the same intermediate dataset through multiple input gates.
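   
   As a rough illustration of the shape such a test might take (reusing the hypothetical `NetworkMemorySketch` class from the sketch above, not the actual Flink test), a JUnit 5 test could assert that buffers are counted once per consuming gate:
   
   ```java
   import static org.junit.jupiter.api.Assertions.assertEquals;
   
   import org.junit.jupiter.api.Test;
   
   class NetworkMemorySketchTest {
   
       @Test
       void buffersAreCountedOncePerConsumingGate() {
           // One vertex consuming dataset "A" through two input gates, 8 buffers each.
           String[] consumedDatasetIds = {"A", "A"};
   
           // The buggy dataset-keyed accounting drops the duplicate consumption.
           assertEquals(8, NetworkMemorySketch.totalBuffersKeyedByDataset(consumedDatasetIds, 8));
   
           // The accumulating variant counts both gates.
           assertEquals(16, NetworkMemorySketch.totalBuffersAccumulated(consumedDatasetIds, 8));
       }
   }
   ```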
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   

