yunfengzhou-hub opened a new pull request, #20752:
URL: https://github.com/apache/flink/pull/20752
## What is the purpose of the change
#20275 improved the checkpoint behavior of `OperatorCoordinator`,
guaranteeing the distributed consistency of operator events sent from the
coordinator to its subtasks. However, while the previous implementation only
required that no two checkpoints be in their "triggering phase" at the same
time, the implementation in #20275 requires that the next checkpoint not be
started before the previous one has fully completed or been aborted. This
limits the use of coordinators in cases where savepoints are triggered or
where the maximum number of allowed concurrent checkpoints is larger than 1.
This PR solves the problem above by adding concurrent checkpoint support to
the coordinator's implementation.
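For reference, the previously restricted scenario is easy to reach with
standard checkpoint configuration. The following illustrative snippet uses
the existing DataStream API (`enableCheckpointing` and
`setMaxConcurrentCheckpoints` are real Flink methods; the class name is just
an example) to allow two checkpoints to be in flight at once:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConcurrentCheckpointsExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Trigger a checkpoint every second.
        env.enableCheckpointing(1000L);
        // Allow up to two checkpoints to be ongoing simultaneously --
        // exactly the situation the coordinator previously could not handle.
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(2);
    }
}
```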
## Brief change log
- The subtask gateway can now cache events for multiple ongoing checkpoints
instead of only one, so that a new checkpoint can start while earlier ones
are still pending (see the sketch below).
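The idea can be illustrated with a heavily simplified, hypothetical sketch.
All names below (`BufferingGateway`, `markForCheckpoint`,
`openGatewayAndUnmark`, the plain `String` event type) are illustrative
assumptions, not Flink's actual classes; the real gateway deals with
serialized operator events and RPC futures.

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

/**
 * Hypothetical sketch of a gateway that buffers events per checkpoint ID,
 * so several checkpoints can be ongoing at the same time. Not Flink code.
 */
class BufferingGateway {

    // Events held back while their checkpoint is ongoing, keyed by
    // checkpoint ID; insertion order matches checkpoint trigger order.
    private final Map<Long, Queue<String>> blockedEvents = new LinkedHashMap<>();

    // Transport to the subtask (stand-in for the real RPC gateway).
    private final Consumer<String> transport;

    private long latestMarkedCheckpoint = -1L;

    BufferingGateway(Consumer<String> transport) {
        this.transport = transport;
    }

    /** A checkpoint starts: events sent from now on are buffered under it. */
    void markForCheckpoint(long checkpointId) {
        latestMarkedCheckpoint = checkpointId;
        blockedEvents.put(checkpointId, new ArrayDeque<>());
    }

    /** Buffer the event if a checkpoint is ongoing, otherwise ship it. */
    void sendEvent(String event) {
        Queue<String> queue = blockedEvents.get(latestMarkedCheckpoint);
        if (queue != null) {
            queue.add(event); // held until that checkpoint completes/aborts
        } else {
            transport.accept(event); // no ongoing checkpoint: send directly
        }
    }

    /** One checkpoint completed (or aborted): release only its events. */
    void openGatewayAndUnmark(long checkpointId) {
        Queue<String> queue = blockedEvents.remove(checkpointId);
        if (queue != null) {
            queue.forEach(transport::accept);
        }
    }
}
```

Keeping one buffer per checkpoint ID, rather than a single buffer for "the"
ongoing checkpoint, is what removes the restriction that a checkpoint must
fully complete before the next one may start.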
## Verifying this change
This change adds tests and can be verified as follows:
- Added integration tests verifying that the coordinator works correctly,
without message loss, when concurrent checkpoints are enforced (a simplified
illustration of the verified property follows below).
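As a hedged illustration of the property those tests check, here is a small
JUnit 5 test against the hypothetical `BufferingGateway` from the sketch
above (not Flink's actual integration-test harness): every event buffered
under overlapping checkpoints must eventually be delivered, with none lost.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class BufferingGatewayTest {

    @Test
    void noEventIsLostUnderOverlappingCheckpoints() {
        List<String> delivered = new ArrayList<>();
        BufferingGateway gateway = new BufferingGateway(delivered::add);

        gateway.markForCheckpoint(1L);
        gateway.sendEvent("a");           // buffered under checkpoint 1
        gateway.markForCheckpoint(2L);    // second checkpoint starts concurrently
        gateway.sendEvent("b");           // buffered under checkpoint 2

        gateway.openGatewayAndUnmark(1L); // checkpoint 1 completes
        gateway.openGatewayAndUnmark(2L); // checkpoint 2 completes

        // Both events arrive, in order, despite the overlapping checkpoints.
        assertEquals(List.of("a", "b"), delivered);
    }
}
```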
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes)
- This PR changes the coordinator's behavior in the case of concurrent
checkpoints, as described above.
- The S3 file system connector: (no)
## Documentation
- Does this pull request introduce a new feature? (no)
- If yes, how is the feature documented? (not applicable)