twalthr opened a new pull request, #27066:
URL: https://github.com/apache/flink/pull/27066
## What is the purpose of the change
Both StatusWatermarkValve and CombinedWatermarkStatus derive watermarks and
handle idle inputs (from input channels and input splits, respectively).
However, the case where all inputs are marked idle is currently handled
differently in the two classes. In both cases, the maximum watermark should be
derived from all idle inputs.
If all splits are idle, we should flush all watermarks, which effectively
means emitting the maximum watermark.
Otherwise, the resulting watermark depends on the order in which the splits
become idle, i.e. a race condition.
E.g., split 1 of 2 emits watermark 5 and becomes idle, while split 2 of 2
emits watermark 4 and becomes idle. If split 2 becomes idle first, watermark 5
wins; if split 1 becomes idle first, watermark 4 wins. Once both are idle, we
should conclude with 5.
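The intended semantics can be illustrated with a minimal sketch: the combined
watermark is the minimum over active outputs, but once *all* outputs are idle
it falls back to the maximum over all of them, so the result no longer depends
on idling order. All names below are invented for illustration and do not
mirror Flink's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of combining per-split watermarks with idleness. */
public class CombinedWatermarkSketch {

    static class SplitOutput {
        long watermark = Long.MIN_VALUE;
        boolean idle = false;
    }

    private final List<SplitOutput> outputs = new ArrayList<>();

    SplitOutput register() {
        SplitOutput o = new SplitOutput();
        outputs.add(o);
        return o;
    }

    /**
     * Combined watermark: minimum over active outputs; if ALL outputs are
     * idle, fall back to the maximum over all outputs so the result is
     * independent of the order in which the splits became idle.
     */
    long combinedWatermark() {
        long min = Long.MAX_VALUE;
        long max = Long.MIN_VALUE;
        boolean anyActive = false;
        for (SplitOutput o : outputs) {
            max = Math.max(max, o.watermark);
            if (!o.idle) {
                anyActive = true;
                min = Math.min(min, o.watermark);
            }
        }
        return anyActive ? min : max;
    }

    public static void main(String[] args) {
        CombinedWatermarkSketch mux = new CombinedWatermarkSketch();
        SplitOutput s1 = mux.register();
        SplitOutput s2 = mux.register();
        s1.watermark = 5;
        s2.watermark = 4;
        // Both active: the minimum (4) holds back the combined watermark.
        System.out.println(mux.combinedWatermark()); // 4
        s2.idle = true;
        // Only split 1 active: combined watermark advances to 5.
        System.out.println(mux.combinedWatermark()); // 5
        s1.idle = true;
        // All idle: the maximum (5) wins regardless of idling order.
        System.out.println(mux.combinedWatermark()); // 5
    }
}
```

With the all-idle fallback, idling split 1 before split 2 (or vice versa)
converges to the same final watermark of 5.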
## Brief change log
- Updated CombinedWatermarkStatus and the subsequent idleness emission
## Verifying this change
This change is already covered by existing tests in
WatermarkOutputMultiplexerTest.
Additional tests have been added:
- WatermarkOutputMultiplexerTest.whenAllImmediateOutputsBecomeIdleWatermarkAdvances
- WatermarkOutputMultiplexerTest.whenAllDeferredOutputsEmitAndIdleWatermarkAdvances
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): no
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: no
- The serializers: no
- The runtime per-record code paths (performance sensitive): yes
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
- The S3 file system connector: no
## Documentation
- Does this pull request introduce a new feature? no
- If yes, how is the feature documented? JavaDocs
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]