wsry opened a new pull request, #20351:
URL: https://github.com/apache/flink/pull/20351
## What is the purpose of the change
Currently, if the output of an upstream job vertex is consumed by multiple
downstream job vertices, the upstream vertex produces a separate dataset for
each consumer. For blocking shuffle, this means the same data is serialized and
persisted multiple times. This ticket aims to optimize this behavior so that
the upstream job vertex produces a single dataset that is read by multiple
downstream vertices.
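For illustration only (not part of the PR), here is a minimal sketch of a job that hits this code path: one upstream vertex whose keyed output is consumed by two downstream vertices, run in BATCH mode so the keyed exchanges become blocking shuffles. The topology, class name, and operator names are hypothetical; only the standard DataStream API is used.

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SharedBlockingOutputExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // BATCH mode turns the keyed exchanges below into blocking shuffles.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        DataStream<Long> source = env.fromSequence(0L, 1_000_000L);
        KeyedStream<Long, Long> keyed = source.keyBy(v -> v % 128);

        // Two separate downstream job vertices consume the same upstream output.
        // Without dataset reuse, the upstream vertex serializes and persists its
        // output once per consumer; with this change a single dataset is shared.
        keyed.map(v -> v + 1).name("consumer-1").print();
        keyed.map(v -> v * 2).name("consumer-2").print();

        env.execute("shared-blocking-output-example");
    }
}
```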
## Brief change log
- Produce one intermediate dataset for multiple consumer job vertices
consuming the same data.
## Verifying this change
This change added tests.
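Beyond the added tests, one rough way to observe the effect (a hedged sketch, not a test from this PR; `getStreamGraph().getJobGraph()`, `JobVertex#getProducedDataSets()` and related accessors are Flink-internal APIs and may differ between versions) is to compile a job with two blocking consumers and count the datasets produced per vertex:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.JobVertex;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ProducedDataSetInspection {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        DataStream<Long> source = env.fromSequence(0L, 1_000L);
        KeyedStream<Long, Long> keyed = source.keyBy(v -> v % 4);
        keyed.map(v -> v + 1).print();
        keyed.map(v -> v * 2).print();

        // Compile to a JobGraph without executing and report how many
        // intermediate datasets each job vertex produces. With dataset reuse,
        // the upstream (source) vertex should report one instead of two,
        // assuming both consumers qualify for sharing.
        JobGraph jobGraph = env.getStreamGraph().getJobGraph();
        for (JobVertex vertex : jobGraph.getVertices()) {
            System.out.println(
                    vertex.getName()
                            + " -> "
                            + vertex.getProducedDataSets().size()
                            + " produced dataset(s)");
        }
    }
}
```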
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no**
/ don't know)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't
know)
- The S3 file system connector: (yes / **no** / don't know)
## Documentation
- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (**not applicable** / docs /
JavaDocs / not documented)