Hi Sergio,
Thanks for driving FLIP-504 forward. This is great work and the single most
pressing need for Flink blue/green (B/G) cutovers. I am evaluating Phase 2
against our current production use cases, where zero loss and zero
duplication during cutover are a hard requirement, and I would appreciate a
few clarifications on the intended implementation semantics.

   1. For multi-source jobs, if watermarks diverge, what is the intended
   cutover rule (min-watermark, per-input readiness, or other)?
   2. How should idleness or temporary lag on one source affect transition
   readiness, so that records are not lost?
   3. For jobs mixing source types or event-time quality, what behavior is
   supported versus out of scope in Phase 2?
   4. For “no-dup/no-loss” expectations, what sink guarantees are assumed
   (idempotent/transactional required or recommended)? For Kafka sinks
   specifically, should EOS transactional mode be treated as the primary
   backstop?
   5. Is there a minimal conformance test matrix planned (divergence,
   idleness, failover, duplicate/loss verification)? Any guidance here would
   really help teams roll out safely with clear correctness boundaries.
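To make question 1 concrete, below is a minimal sketch of the min-watermark
rule I have in mind: the new job is treated as caught up only once the
minimum watermark across all sources has passed the cutover target. The
class and method names (CutoverReadiness, readyToCutOver) and the thresholds
are hypothetical illustrations for discussion, not anything from FLIP-504.

```java
import java.util.Map;

public class CutoverReadiness {
    // Hypothetical min-watermark cutover rule: gate the transition on the
    // slowest source, so a lagging input cannot cause loss at cutover.
    static boolean readyToCutOver(Map<String, Long> sourceWatermarks,
                                  long cutoverTargetMs) {
        long minWatermark = sourceWatermarks.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MIN_VALUE);
        return minWatermark >= cutoverTargetMs;
    }

    public static void main(String[] args) {
        Map<String, Long> wm = Map.of("orders", 1_000L, "clicks", 400L);
        // "clicks" lags, so min-watermark gating blocks the cutover at 500
        System.out.println(readyToCutOver(wm, 500L)); // false
        System.out.println(readyToCutOver(wm, 400L)); // true
    }
}
```

The open question for me is what happens when one input is idle or
permanently slow under such a rule, which is what questions 2 and 3 are
trying to pin down.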

Thanks,
Vamshi
