wsry opened a new pull request, #20350:
URL: https://github.com/apache/flink/pull/20350

   ## What is the purpose of the change
   
   Currently, one intermediate dataset can only be consumed by a single 
downstream consumer vertex. If multiple consumer vertices consume the same 
output of the same upstream vertex, multiple intermediate datasets are 
produced. We can optimize this behavior to produce only one intermediate 
dataset that is shared by all consumer vertices. As a first step, we should 
allow multiple downstream consumer job vertices to share the same 
intermediate dataset on the scheduler side. (Note that this optimization only 
works for blocking shuffles, because a pipelined result partition cannot be 
consumed multiple times.)
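   
   For context, here is a minimal sketch (not part of this PR) of how the 
duplication currently arises when wiring a JobGraph by hand with Flink's 
runtime classes. It assumes the long-standing 
`JobVertex#connectNewDataSetAsInput` API; the class name `SharedDataSetSketch` 
is purely illustrative, and the API through which a consumer would attach to 
an already existing dataset is what this change introduces, so it is not shown 
here.
   
   ```java
   import org.apache.flink.runtime.io.network.partition.ResultPartitionType;
   import org.apache.flink.runtime.jobgraph.DistributionPattern;
   import org.apache.flink.runtime.jobgraph.JobVertex;
   
   // Sketch only: shows why two consumers of the same upstream output
   // currently lead to two separate intermediate datasets on the producer.
   public class SharedDataSetSketch {
   
       public static void main(String[] args) {
           JobVertex producer = new JobVertex("producer");
           JobVertex consumerA = new JobVertex("consumer-a");
           JobVertex consumerB = new JobVertex("consumer-b");
   
           // Each call creates a new IntermediateDataSet on the producer,
           // even though both consumers logically read the same output.
           consumerA.connectNewDataSetAsInput(
                   producer, DistributionPattern.ALL_TO_ALL, ResultPartitionType.BLOCKING);
           consumerB.connectNewDataSetAsInput(
                   producer, DistributionPattern.ALL_TO_ALL, ResultPartitionType.BLOCKING);
   
           // Before this change: two datasets. After this change (blocking
           // shuffle only), the scheduler can let both consumer vertices
           // share a single intermediate dataset instead.
           System.out.println("produced datasets: " + producer.getProducedDataSets().size());
       }
   }
   ```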
   
   ## Brief change log
   
     - Allow multiple downstream consumer job vertices to share the same 
intermediate dataset on the scheduler side
   
   
   ## Verifying this change
   
   This change added tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   

